What is the Model for Computer Security?

Hardware 

Including computer systems and other data processing, data storage, and data communications devices

Software

Includes the operating system, system utilities, and applications.

Data

Including files and databases, as well as security-related data such as password files.


Security Concepts and Relationships


Computer Security Terminology

Adversary (threat agent)

An entity that attacks, or is a threat to, a system. 

Attack

An attack on system security emanating from a smart threat; that is, an intelligent act that is a willful attempt (particularly in the sense of a method or technique) to circumvent security services and violate a system's security policy.

Countermeasure

An action, device, procedure, or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.

Risk

An expectation of loss expressed as the probability that a given threat will exploit a given vulnerability with a given malicious outcome.
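The definition above is often made concrete by estimating risk as the probability that a threat succeeds in a given year multiplied by the harm each incident would cause. A minimal sketch in Python; the threat names, probabilities, and dollar figures are invented for illustration.

```python
# Risk as expected loss: the probability that a given threat exploits a
# given vulnerability in a year, multiplied by the harm per incident.
# All threat names, probabilities, and dollar figures are invented.

def expected_annual_loss(prob_per_year, harm_per_incident):
    """Expected yearly loss for one threat/vulnerability pair."""
    return prob_per_year * harm_per_incident

threats = {
    "password file disclosed": (0.05, 200_000.0),  # 5% per year, $200k harm
    "web server defaced":      (0.30,  10_000.0),
    "network link saturated":  (0.50,   2_000.0),
}

for name, (prob, harm) in sorted(threats.items()):
    print(f"{name}: ${expected_annual_loss(prob, harm):,.0f}/year")
```

Ranking threats by expected loss in this way is one simple rationale for how owners prioritize countermeasures.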

Security Policy

A set of rules and practices that specify or regulate how a system or organization provides security services to protect sensitive and critical system resources.

System Resource (Asset)

Data contained in an information system; or a service provided by a system; or a system capability, such as processing power or communication bandwidth; or an item of system equipment (i.e., a system component: hardware, firmware, software, or documentation); or a facility that houses system operations and equipment.

Threat

A potential for violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm. That is, a threat is a possible danger that might exploit a vulnerability.

Vulnerability

A flaw or weakness in a system's design, implementation, or operation and management that could be exploited to violate the system's security policy.


Communications Equipment and Networks: Local and wide area network communication links, bridges, routers, etc.

In the context of security, we address vulnerabilities in system resources. [NRC02] lists the following general categories of vulnerabilities for a computer system or network resource:

• It can be corrupted, causing it to do something wrong or provide incorrect answers. For example, stored data values may differ from what they should be because they have been incorrectly modified.

• It can become leaky. For example, someone who should not have access to some or all of the information available over the network is granted that access.

• It can become unavailable or very slow. That is, using the system or the network becomes impossible or impractical.
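The first category, corruption, is commonly detected by keeping a cryptographic digest of the data while it is known to be good and comparing it before use. A minimal sketch using Python's standard hashlib; the data values are invented for illustration.

```python
# Detecting the "corrupted" category: record a SHA-256 digest of the
# data while it is known-good, then compare before use. A mismatch
# means the stored values are no longer what they should be.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_good = b"account=42;balance=1000"
recorded = digest(known_good)            # saved alongside the data

tampered = b"account=42;balance=9999"    # an incorrect modification
print(digest(known_good) == recorded)    # True: integrity intact
print(digest(tampered) == recorded)      # False: corruption detected
```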


These three general types of vulnerability correspond to the concepts of integrity, confidentiality, and availability listed earlier in this section. Corresponding to the various types of vulnerabilities of a system resource are threats that are capable of exploiting those vulnerabilities. A threat represents potential harm to the security of an asset. An attack is a threat that is carried out (threat action) and, if successful, leads to an undesirable violation of security, or threat consequence. The agent carrying out the attack is referred to as an attacker or threat agent. We can distinguish two general types of attacks:


Active attack: An attempt to alter system resources or affect their operation.

Passive attack: An attempt to learn or make use of information from the system that does not affect system resources. We can also classify attacks based on the origin of the attack.

Inside attack: Initiated by an entity inside the security perimeter (an “insider”). The insider is authorized to access system resources but uses them in a way not approved by those who granted the authorization. 

Outside attack: Initiated from outside the perimeter, by an unauthorized or illegitimate user of the system (an “outsider”). On the Internet, potential outside attackers range from amateur pranksters to organized criminals, international terrorists, and hostile governments.


Finally, a countermeasure is any means taken to deal with a security attack. Ideally, a countermeasure can be devised to prevent a particular type of attack from succeeding. When prevention is not possible, or fails in some instance, the goal is to detect the attack and then recover from its effects. A countermeasure may itself introduce new vulnerabilities. In any case, residual vulnerabilities may remain after countermeasures are imposed. Such vulnerabilities may be exploited by threat agents, representing a residual level of risk to the assets. Owners will seek to minimize that risk given other constraints.
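A countermeasure that detects an attack and reacts to it can be sketched as a simple login-lockout rule. The threshold and account model below are invented for illustration; real systems add lockout timers, alerting, and recovery procedures.

```python
# A detect-and-react countermeasure: lock an account after repeated
# failed logins. Threshold and account model are invented for
# illustration only.

MAX_FAILURES = 3

class Account:
    def __init__(self, password):
        self._password = password
        self.failures = 0
        self.locked = False

    def login(self, attempt):
        if self.locked:
            return False                   # countermeasure in effect
        if attempt == self._password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_FAILURES:  # detect the attack...
            self.locked = True             # ...and react to it
        return False

acct = Account("s3cret")
for guess in ("aaa", "bbb", "ccc"):        # a password-guessing attack
    acct.login(guess)
print(acct.locked)                         # True: account is locked
print(acct.login("s3cret"))                # False: even the right password
```

Note that this countermeasure introduces a new vulnerability of its own: an outsider can deliberately lock legitimate users out, an availability attack, which illustrates the residual-risk point above.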

The Challenges of Computer Security


Computer security is both fascinating and complex. Some of the reasons follow:

  • Computer security is not as easy as it might seem to a beginner. The requirements seem simple; in fact, most of the major security service requirements can be given self-explanatory one-word labels: confidentiality, authentication, non-repudiation, integrity. But the mechanisms used to meet those requirements can be quite complex, and understanding them can require rather subtle reasoning.
  • When developing a particular security mechanism or algorithm, one must always consider potential attacks on those security features. In many cases, successful attacks are designed by looking at the problem in a completely different way, thereby exploiting an unexpected weakness in the mechanism.
  • Due to point 2, the procedures used to provide particular services are often counterintuitive. Typically, a security mechanism is complex, and it is not obvious from the statement of a particular requirement that such elaborate measures are needed. Elaborate security mechanisms make sense only when the various aspects of the threat are considered.
  • Once various security mechanisms have been designed, it is necessary to decide where to use them. This applies both in a physical sense (e.g., at which points in a network particular security mechanisms are needed) and in a logical sense (e.g., at what layer or layers of an architecture such as TCP/IP (Transmission Control Protocol/Internet Protocol) mechanisms should be placed).
  • Security mechanisms typically involve more than a particular algorithm or protocol. They also require that participants be in possession of some secret information (such as an encryption key), which raises questions about the creation, distribution, and protection of that secret information.
  • There may also be a dependence on communication protocols, whose behavior can complicate the task of developing the security mechanism. For example, if the proper functioning of the security mechanism requires setting limits on the transit time of a message from sender to receiver, then any protocol or network that introduces variable, unpredictable delays may render such time limits meaningless.
  • Computer security is essentially a battle of wits between an attacker who tries to find holes and the designer or administrator who tries to close them. The attacker need find only a single weakness, while the designer must find and eliminate all weaknesses to achieve perfect security.
  • There is a natural tendency on the part of users and system managers to perceive little benefit from security investment until a security failure occurs.
  • Security requires regular, even constant, monitoring, and this is difficult in today's short-term, overloaded environment.
  • Security is still too often an afterthought, bolted onto a system after the design is complete, rather than being an integral part of the design process.
  • Many users and even security administrators see strong security as a barrier to the efficient and user-friendly operation of an information system or the use of information.
  • The difficulties just enumerated will be encountered in numerous ways as we examine the various security threats and mechanisms.


Introduction to Computer Security

 A Definition of Computer Security

Figure 1


Computer Security

The protection provided to an automated information system in order to attain the applicable objectives of preserving the integrity, availability, and confidentiality of information system resources (including hardware, software, firmware, information/data, and telecommunications).


This definition introduces three key objectives that are at the heart of computer security

Confidentiality: This term covers two related concepts:

Data confidentiality: Assures that private or confidential information is not made available or disclosed to unauthorized individuals.

Privacy: Assures that individuals control or influence what information related to them may be collected and stored, and by whom and to whom that information may be disclosed.


Integrity: This term covers two related concepts:

Data integrity: Assures that information and programs are changed only in a specified and authorized manner.

System integrity: Assures that a system performs its intended function in an unimpaired manner, free from deliberate or inadvertent unauthorized manipulation of the system.

Availability: Ensures that systems work promptly and that service is not denied to authorized users.


These three concepts form what is often referred to as the CIA triad. The three concepts embody the fundamental security objectives for data and information as well as for computing services. For example,

RFC 2828 defines information as "facts and ideas that can be represented (encoded) as various forms of data" and data as "information in a specific physical representation, usually a sequence of symbols that carry meaning; especially a representation of information that can be processed or produced by a computer."

FIPS 199 (Standards for Security Categorization of Federal Information and Information Systems) lists confidentiality, integrity, and availability as the three security objectives for information and for information systems. FIPS PUB 199 provides a useful characterization of these three objectives in terms of requirements and the definition of a loss of security in each category:


Confidentiality: Preserving authorized restrictions on information access and disclosure, including means for protecting personal privacy and proprietary information. A loss of confidentiality is the unauthorized disclosure of information.

Integrity: Guarding against improper information modification or destruction, including ensuring information non-repudiation and authenticity. A loss of integrity is the unauthorized modification or destruction of information.

Availability: Ensuring timely and reliable access to and use of information. A loss of availability is the disruption of access to or use of information or an information system.


Although the use of the CIA triad to define security objectives is well established, some in the security field feel that additional concepts are needed to present a complete picture. Two of the most commonly mentioned are the following:

Authenticity: The property of being genuine and being able to be verified and trusted; confidence in the validity of a transmission, a message, or a message originator. This means verifying that users are who they say they are and that each input arriving at the system came from a trusted source.

Accountability: The security goal that generates the requirement for actions of an entity to be traced uniquely to that entity. This supports non-repudiation, deterrence, fault isolation, intrusion detection and prevention, and after-action recovery and legal action. Because truly secure systems are not yet an achievable goal, we must be able to trace a security breach to a responsible party. Systems must keep records of their activities to permit later forensic analysis to trace security breaches or to aid in transaction disputes.
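The record-keeping that accountability demands can be sketched as an append-only audit trail. A minimal example using Python's standard logging module; the user names, action labels, and log format are invented for illustration.

```python
# Accountability in practice: an append-only audit trail that ties each
# action to the entity that performed it. Field names and actions are
# invented for illustration.
import io
import logging

stream = io.StringIO()                  # stand-in for an append-only log file
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("user=%(user)s action=%(message)s"))

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(handler)
audit_log.propagate = False             # keep entries out of other handlers

def audit(user, action):
    """Record who did what, so a breach can be traced to its actor."""
    audit_log.info(action, extra={"user": user})

audit("alice", "read:payroll.db")
audit("mallory", "delete:payroll.db")   # forensics can trace this later

print(stream.getvalue(), end="")
```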


Note that FIPS PUB 199 includes authenticity under integrity.
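As a concrete illustration of the confidentiality objective applied to security-related data such as password files, passwords are stored as salted, slow hashes rather than in the clear, so that a leaked file does not disclose the secrets. A sketch using Python's standard library; the iteration count is illustrative, not a recommendation.

```python
# Confidentiality for stored credentials: keep a salted, slow hash of
# each password instead of the password itself. Iteration count and
# passwords below are illustrative only.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, derived_key) for storage in the password file."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key

def verify_password(password, salt, stored_key):
    """Recompute the hash and compare in constant time."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, stored_key)

salt, stored_key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored_key))  # True
print(verify_password("wrong guess", salt, stored_key))                   # False
```

The constant-time comparison also touches the integrity goal: it avoids leaking information about the stored key through timing differences.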




What is Artificial intelligence?

Although we have stated that AI is exciting, we have not defined it. Figure 1.1 shows eight definitions of AI, organized along two dimensions. The definitions on the top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior.

Figure 1.1
The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality. A system is rational if it does the "right thing," given what it knows.


Acting humanly: The Turing Test approach

The Turing Test, which Alan Turing proposed in 1950, was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer. The details of the test, and the question of whether a computer would really be intelligent if it passed, are discussed in upcoming articles. For now, we note that programming a computer to pass a rigorously applied test provides plenty to work on. The computer would need the following capabilities:

natural language processing to enable it to communicate successfully in English;

knowledge representation to store what it knows or hears;

automated reasoning to use the stored information to answer questions and to draw new conclusions;

machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Total Turing Test

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will need

Computer vision and robotics

computer vision to perceive objects, and robotics to manipulate objects and move about.

The majority of AI is made up of these six fields, and Turing deserves credit for creating a test that still holds up 60 years later. However, AI researchers have not put much effort into passing the Turing Test because they believe that studying the underlying principles of intelligence is more important than replicating an example. When the Wright brothers and others started using wind tunnels and learning about aerodynamics, the search for "artificial flight" became successful. The creation of "machines that fly so exactly like pigeons that they can fool even other pigeons" is not defined as the objective of the field of aerospace engineering in textbooks.


Thinking humanly: The cognitive modeling approach

We must be able to determine how humans think in order to claim that a given program thinks like a human. We need to get inside the actual workings of human minds. This can be accomplished in three ways: through introspection, trying to catch our own thoughts as they go by; through psychological experiments, observing a person in action; and through brain imaging, observing the brain in action.

Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input–output behavior matches corresponding human behavior, that is evidence that some of the program's mechanisms could also be operating in humans. For instance, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content merely to have their program solve problems correctly.

They were more concerned with comparing its reasoning steps to those of human subjects solving the same problems in cognitive science. Computer models from artificial intelligence and psychological experimentation are combined in the interdisciplinary field of cognitive science to develop precise and testable theories of the human mind.

According to Wilson and Keil (1999), cognitive science is a fascinating field that merits several textbooks and at least one encyclopedia. We will occasionally discuss the similarities and differences between human cognition and AI techniques. However, the foundation of genuine cognitive science must be actual human or animal experiments. Since we presume that the reader only has a computer for experimentation, we will leave that for other books.

In the early days of AI, the approaches were frequently confused: an author would argue that an algorithm performs well on a task and that it is therefore a good model of human performance, or vice versa. Modern authors distinguish between the two kinds of claims.

Because of this distinction, AI and cognitive science have been able to advance at a faster rate. In computer vision, which incorporates neurophysiological evidence into computational models, the two fields continue to fertilize each other.



Thinking rationally: The “laws of thought” approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yield correct conclusions when given correct premises. For example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
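Aristotle's syllogism can be mechanized as a tiny forward-chaining program: from the rule "every man is mortal" and the fact "Socrates is a man," it derives "Socrates is mortal." The (predicate, subject) representation below is invented for illustration, not a standard logic library.

```python
# A syllogism as forward chaining: repeatedly apply rules to the known
# facts until nothing new can be derived (a fixed point). The
# representation of facts and rules is invented for illustration.

facts = {("man", "Socrates")}
rules = [("man", "mortal")]               # "every man is mortal"

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                        # loop until nothing new appears
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(("mortal", "Socrates") in forward_chain(facts, rules))  # True
```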

In the 19th century, logicians developed a precise notation for statements about all kinds of objects in the world and the relations among them. (This contrasts with ordinary arithmetic notation, which provides only for statements about numbers.) By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation. (Although if no solution exists, the program might loop forever.) The so-called logicist tradition within

artificial intelligence hopes to build on such programs to create intelligent systems. This approach faces two main obstacles. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than 100% certain. Second, there is a big difference between solving a problem "in principle" and solving it in practice.

Any computer's computational resources can be depleted by problems with just a few hundred facts if it doesn't know which reasoning steps to try first. Even though these two obstacles are applicable to any attempt to construct computational reasoning systems, the logicist tradition was the first to encounter them.


Acting rationally: The rational agent approach

An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.

In the "laws of thought" approach to AI, correct inferences were emphasized. Being a rational agent sometimes requires drawing accurate inferences because one way to act rationally is to reason logically to the conclusion that a particular action will achieve one's goals and then to act on that conclusion. However, correct inference requires more than rationality; There are times when there is nothing that can be proven to be right, but something still needs to be done.

In addition, there are ways of acting rationally that do not involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
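Such reflex behavior is exactly what simple reflex agents capture: a direct mapping from the current percept to an action, with no deliberation. A minimal condition-action sketch in Python; the percept and action names are invented for illustration.

```python
# A simple reflex agent: condition-action rules map the current percept
# directly to an action, with no lookahead or deliberation. Percept and
# action names are invented for illustration.

def reflex_agent(percept):
    """percept = (location, status); returns an action string."""
    location, status = percept
    if status == "hot":
        return "withdraw-hand"            # the hot-stove reflex
    if status == "dirty":
        return "clean"
    return "move-right" if location == "A" else "move-left"

print(reflex_agent(("A", "hot")))         # withdraw-hand
print(reflex_agent(("A", "dirty")))       # clean
print(reflex_agent(("B", "ok")))          # move-left
```

The design trade-off is the one the text describes: such rules act fast and often succeed, but they cannot handle situations where the right action depends on reasoning about consequences.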

All the skills needed for the Turing Test also allow an agent to act rationally. Knowledge representation and reasoning enable agents to reach good decisions. We need to be able to generate comprehensible sentences in natural language to get by in a complex society. We need learning not only for erudition, but also because it improves our ability to make good decisions.

The rational-agent approach has two advantages over the other approaches. First, it is more general than the "laws of thought" approach because correct inference is just one of several possible mechanisms for achieving rationality. Second, it is more amenable to scientific development than approaches based on human behavior or human thought. The standard of rationality is mathematically well defined and completely general, and can be "unpacked" to generate agent designs that provably achieve it.

Human behavior, on the other hand, is well adapted to one specific environment and is defined by, well, the sum total of all the things that humans do. This book therefore concentrates on general principles of rational agents and on components for constructing them. We will see that despite the apparent simplicity with which the problem can be stated, an enormous variety of issues come up when we try to solve it. Some of these issues are described in more detail in upcoming sections.

One crucial point to remember: we will see before long that achieving perfect rationality, always doing the right thing, is not feasible in complex environments. The computational demands are simply too high. For most of the book, however, we will adopt the working hypothesis that perfect rationality is a good starting point for analysis. It simplifies the problem and provides the appropriate setting for most of the foundational material in the field.


In this way, our team will continue with ARTIFICIAL INTELLIGENCE (A Modern Approach). Stay tuned!