The information security guideline must cover the essential requirements for information security and data protection (all core topics must be described in this guideline, where applicable) and should be based on an existing standard (e.g. ISO 27002, NIST SP 800 series, BSI IT baseline protection, the IT security manual of the WKO, etc.). The guideline must be approved by management and must be available to employees.
The training must cover the topics of the information security policy and address current cyber threats. It must cover at least the following topics:
- Secure handling of computers and information
- Correct selection and management of passwords
- Internet security
- E-mail, spam and phishing
- Dangerous malware
- Response to suspected IT security incidents
Complete training must take place at least when an employee joins, and updated information must be communicated at least every two years.
There must be at least one named person who is responsible for information security, i.e. who creates the guidelines, oversees the implementation of the measures, and is given the necessary time to do so. This person must have the necessary basic technical knowledge of the relevant topics. The role can be carried out alongside other duties or performed by external persons on behalf of the company.
There must be a directory of all IT assets (systems, services) in use. This directory must contain at least the name and version of each system and the person responsible for it.
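As an illustration, the minimum fields of such a directory can be represented and checked programmatically. The sketch below is a minimal example; all field names and entries in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One entry in the IT asset directory (field names are illustrative)."""
    name: str     # name of the system or service
    version: str  # deployed version
    owner: str    # person responsible

# Hypothetical example entries.
inventory = [
    Asset(name="Mail gateway", version="2.4.1", owner="J. Doe"),
    Asset(name="ERP system", version="11.0", owner="A. Smith"),
]

def incomplete_entries(assets):
    """Return entries missing any of the required fields."""
    return [a for a in assets if not (a.name and a.version and a.owner)]
```

A check like `incomplete_entries` makes it easy to spot directory entries that do not yet meet the minimum content requirement.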
- Access to both applications and file systems must be regulated. Correctly configured permissions must ensure that only persons who need access based on their job profile can obtain it.
- There must be a process for granting and removing access rights.
There must be clearly described minimum criteria for passwords, implementing the recommendations of current standards (password strength, two-factor authentication where necessary and appropriate, separation of passwords, etc.). Reference: BSI, NIST SP 800-63, etc.
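Such minimum criteria can be enforced mechanically. The sketch below assumes rules in the spirit of NIST SP 800-63B (a length floor plus a blocklist of known-weak passwords, rather than composition rules); the length value and blocklist contents are illustrative assumptions.

```python
# Illustrative blocklist of known-weak passwords; in practice this would be
# a large list of breached passwords, as recommended by NIST SP 800-63B.
COMMON_PASSWORDS = {"password", "123456", "qwerty"}

def password_acceptable(pw: str, min_length: int = 12) -> bool:
    """Check a password against minimum criteria in the NIST SP 800-63B
    style: a length floor and a blocklist instead of composition rules.
    The default length floor of 12 is an illustrative assumption."""
    if len(pw) < min_length:
        return False
    if pw.lower() in COMMON_PASSWORDS:
        return False
    return True
```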
There must be a document describing the requirements for the secure configuration of the systems in use. References to manufacturer recommendations are sufficient. These settings must actually be implemented on all devices in use, as far as technically possible. Alternatively, a mandatory vulnerability scan must be performed before a device goes into production.
Custom software (e.g. customised open-source software, but not standard software) that can be accessed from the Internet must be checked for vulnerabilities by means of a penetration test, at the latest before it goes live.
- Systems must be updated regularly with the updates provided by the manufacturer. No system update may be more than one quarter overdue (unless there is a documented reason why an update cannot be applied).
- Systems that are no longer supplied with security updates by the manufacturer must be taken out of service in good time, or there must be a defined exception process and a documented exception list.
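The one-quarter limit above can be checked mechanically. The sketch below assumes that "one quarter" means roughly 92 days and that the release dates of pending updates are known; the update names and dates are illustrative.

```python
from datetime import date, timedelta

# Assumption: "one quarter" is interpreted as 92 days.
MAX_PATCH_AGE = timedelta(days=92)

def overdue_updates(pending, today):
    """Return names of pending updates released more than one quarter ago.
    `pending` maps update name -> release date (names here are hypothetical)."""
    return sorted(name for name, released in pending.items()
                  if today - released > MAX_PATCH_AGE)
```

For example, on 2024-06-01 an update released on 2024-01-02 is overdue, while one released on 2024-05-01 is not.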
A network segmentation device (e.g. firewall, router, etc.) must be used that filters network traffic to and from the Internet on the basis of rules set as restrictively as possible.
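"As restrictive as possible" usually means a default-deny policy with explicit allow rules. A minimal sketch using nftables syntax (the single allowed port is an illustrative assumption):

```
# Minimal default-deny input policy (nftables syntax); the allowed
# HTTPS port is an illustrative assumption.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept    # replies to outbound traffic
        iif "lo" accept                        # local loopback
        tcp dport 443 accept                   # example: allow HTTPS only
    }
}
```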
At a minimum, up-to-date anti-virus software must be in use that continuously checks systems and files for malware. In case of suspicion, an alert must be raised within the organisation.
It should be possible to transfer files in encrypted form, either by e-mail (e.g. S/MIME, encrypted PDF, mandatorily enforced TLS, etc.) or via encrypted upload. Forms on the website must be submitted exclusively via HTTPS.
- At least the standard logs of the operating systems must be activated. The logs must be available to the company.
- There must be an overview of all active system logs and their storage location.
- Log records must be retained for at least three months.
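The retention requirement can be checked against the log overview. A minimal sketch, assuming each log source records the date of its oldest retained entry; sources whose oldest entry is younger than the retention period are flagged for review (note that a recently commissioned log would also be flagged).

```python
from datetime import date, timedelta

# Assumption: "three months" is interpreted as 90 days.
RETENTION = timedelta(days=90)

def retention_review(logs, today):
    """Return log sources whose oldest retained entry is younger than the
    retention period, so that early deletion can be ruled out manually.
    `logs` maps log name -> (storage location, date of oldest entry)."""
    return sorted(name for name, (_location, oldest) in logs.items()
                  if today - oldest < RETENTION)
```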
The emergency plan, including the backup concept, must describe how to react to a serious IT security incident. Serious security incidents include, for example:
- System outages,
- Malware infections (incl. cryptolocker),
- Data leakage
The plans must be tested at least every two years.
A vulnerability scanning tool must be in place and must be used at least once a month.
There must be a policy for secure software development that includes security requirements, secure coding rules and a test concept. For software purchases, there must be a security requirements list and a vendor risk analysis process.
- At least every two years, penetration tests must be performed to identify vulnerabilities in the organisation.
- Based on the identified weaknesses, measures must be defined and implemented.
At least an intrusion detection/prevention system must be in use that can identify suspected unauthorized activities in the network, either via a baselining approach, heuristic processes or machine learning.
A mechanism must be active on all systems (clients/servers) that allows only approved processes and applications to run.
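Such allowlisting mechanisms typically identify approved applications by cryptographic hash or signature. A minimal hash-based sketch (the allowlist contents are illustrative):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """SHA-256 fingerprint of a binary's contents."""
    return hashlib.sha256(data).hexdigest()

# Illustrative allowlist of approved executables, keyed by SHA-256 hash.
APPROVED = {sha256_of(b"approved-binary-contents")}

def may_execute(binary: bytes) -> bool:
    """Permit execution only for binaries on the approved list."""
    return sha256_of(binary) in APPROVED
```

In practice this logic sits in the operating system or an endpoint agent (e.g. OS-level application control), not in application code; the sketch only illustrates the decision rule.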
- An identity and access management system must be in use that makes all identities and their authorizations clearly traceable on an individual-user basis.
- Authorization management must also include administrative authorizations and authorizations for access to customer systems.
A SIEM must be in use to which at least the critical network and security systems are connected and whose log files are continuously correlated and analyzed for irregularities.
- Employees with proven qualifications in the area of IT security must be employed by the company, or there must be an SLA with a corresponding company that takes over ongoing monitoring.
- Suspicious cases must be investigated, and in the case of confirmed incidents an alert must be issued and, if relevant, affected customers must be informed.
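Continuous correlation can start with simple rules over event streams. Below is a minimal sketch that flags an account when failed logins reach a threshold within a time window; the threshold and window values are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def brute_force_suspects(events, threshold=5, window=timedelta(minutes=10)):
    """events: iterable of (timestamp, user) pairs for failed logins.
    Returns users with at least `threshold` failures inside any single
    time window of length `window` (default values are illustrative)."""
    by_user = defaultdict(list)
    for ts, user in events:
        by_user[user].append(ts)
    suspects = set()
    for user, times in by_user.items():
        times.sort()
        # Slide over sorted timestamps: if the threshold-th failure after
        # position i falls within the window, flag the user.
        for i in range(len(times) - threshold + 1):
            if times[i + threshold - 1] - times[i] <= window:
                suspects.add(user)
                break
    return suspects
```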
Employees with proven qualifications in the areas of sophisticated incident response and IT forensics must be employed by the company, or there must be an SLA with a corresponding company, or access to such a company must be covered by cyber insurance.
The resilience concept must include preventive and reactive measures to be able to react to serious security incidents and thus ensure business continuity. Serious security incidents include:
- System outage
- Malware infection (incl. cryptolocker)
- Data leakage
- Targeted hacking attacks (e.g. APTs)
When critical services are run in the cloud, these measures and tests must be verified at the cloud operator (e.g. via ISAE 3402 reports). Tests must be performed at least once a year, and necessary improvement measures must be implemented.
There must be a documented process which ensures, from the initial selection phase and continuously thereafter, that critical suppliers manage their information security and business continuity risks adequately.