Computer Science – Concordia University Texas

What Is Information Assurance and Security?

There is a clear need for information assurance and security workers. The Bureau of Labor Statistics (BLS) projects that the employment of information security analysts will increase 28 percent by 2026, which is much faster than the average for all occupations.

“Demand for information security analysts is expected to be very high,” the BLS explained. “Cyberattacks have grown in frequency, and analysts will be needed to come up with innovative solutions to prevent hackers from stealing critical information or creating problems for computer networks.”

The 2017 Equifax cybersecurity breach — one of the five biggest data breaches ever in terms of its reach and the kind of information exposed — demonstrates what can happen in a security crisis. As many as 143 million Americans (nearly half the country) had their personal information compromised in the breach, according to CNN. Names, Social Security numbers, birth dates, addresses, driver’s license numbers and credit card numbers were among the types of information exposed in the attack.

These types of attacks can harm customers, as well as organizations’ reputations, profits and assets. To guard against these threats, businesses need to take information assurance and security seriously.

What Is Information Assurance and Security?

Information assurance and security are related but separate concepts. “The terms are inherently linked and share an ultimate goal of preserving the integrity of information,” according to data loss prevention software company Digital Guardian.

Clarifying the Terms

Information assurance is about protecting information assets from destruction, degradation, manipulation and exploitation by an opponent, according to Andrew Blyth and Gerald Kovacich in their book Information Assurance: Surviving in the Information Environment. They offer an additional definition from the United States Department of Defense (from 1996) to help clarify what information assurance means.

Actions taken that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality and non-repudiation. This includes providing for restoration of information systems by incorporating protection, detection and reaction capabilities.

Information security can be defined as the protection of information against unauthorized disclosure, transfer, modification or destruction, whether accidental or intentional.

Information security can be considered a sub-discipline or component of information assurance. Both concepts deal with intentional and unintentional attacks, but information assurance covers areas not covered by information security such as perception management. This level of information assurance deals with physical and technical measures to maintain an accurate, objective perception of the security state of the system and the information contained in the system.
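
The definitions above emphasize properties such as integrity, authentication and non-repudiation. As a minimal illustration, the Python sketch below uses a keyed hash (HMAC) from the standard library to detect tampering with a record; the key, record contents and function names are invented for the example, and real systems would manage keys far more carefully.

    import hmac
    import hashlib

    SECRET_KEY = b"shared-secret"  # illustrative only; real keys come from a key-management system

    def sign(message: bytes) -> str:
        """Produce a keyed digest that a recipient holding the same key can verify."""
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, digest: str) -> bool:
        """Recompute the digest; a mismatch means the data was altered or the key differs."""
        return hmac.compare_digest(sign(message), digest)

    record = b"account=1024;balance=5000"
    tag = sign(record)
    assert verify(record, tag)                      # integrity and authenticity hold
    assert not verify(record + b"-tampered", tag)   # any modification is detected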

Application

Information security offers many benefits for businesses, according to Digital Guardian.

  • Maintaining compliance with regulatory standards, preventing costly security events, maintaining the company’s reputation and preserving the trust of customers, suppliers, partners and shareholders.
  • Protection against fines issued by regulatory agencies or lawsuits from other companies and individuals, if the company fails to protect sensitive information and other companies or individuals suffer consequences in a breach.

Because information security is included within information assurance, the above benefits apply to information assurance. Additional benefits include data integrity, usability, non-repudiation, authenticity, confidentiality, availability and the reliable and timely access to information.

Information assurance is broad in nature. This field stresses organizational risk management and overall information quality. It’s a strategic initiative that incorporates a wide range of information protection and management processes. Examples include security audits, network architecture, compliance audits, database management and the development, implementation and enforcement of organizational information management policies.

Information security involves mitigating risks through secure systems and architecture, in an effort to eliminate or reduce vulnerabilities. The BLS listed some of the typical tasks that information security analysts perform; a toy sketch of the first task appears after the list.

  • Monitor networks for security breaches and investigate when violations occur.
  • Install software, such as firewalls and data encryption programs, to protect data.
  • Prepare reports to document security breaches and the resulting damage.
  • Conduct penetration testing, which simulates attacks to locate system vulnerabilities.
  • Research the latest information technology (IT) security trends.
  • Develop security standards and best practices.
  • Recommend security enhancements to management or senior IT staff.
  • Help computer users when they need to install or learn about new security products and procedures.
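
As a toy illustration of the first task in the list, the sketch below scans hypothetical log lines and flags repeated failed logins from one address. The log format and alert threshold are assumptions made for the example; real monitoring relies on dedicated tooling.

    from collections import Counter

    # Hypothetical log format: "<timestamp> FAILED_LOGIN user=<name> ip=<addr>"
    log_lines = [
        "2017-12-18T14:56:01 FAILED_LOGIN user=admin ip=203.0.113.9",
        "2017-12-18T14:56:03 FAILED_LOGIN user=admin ip=203.0.113.9",
        "2017-12-18T14:56:05 FAILED_LOGIN user=admin ip=203.0.113.9",
        "2017-12-18T14:57:12 FAILED_LOGIN user=carol ip=198.51.100.4",
    ]

    THRESHOLD = 3  # assumed cutoff; real systems tune this per policy

    failures = Counter(
        line.split("ip=")[1] for line in log_lines if "FAILED_LOGIN" in line
    )
    for ip, count in failures.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} - possible brute-force attempt")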

Information Assurance and Security in Healthcare

Healthcare is an industry that’s particularly affected by information assurance and security issues.

The BLS noted in the job outlook for information security analysts that “as the healthcare industry expands its use of electronic medical records, ensuring patients’ privacy and protecting personal data are becoming more important. More information security analysts are likely to be needed to create the safeguards that will satisfy patients’ concerns.”

Hackers have found healthcare to be a lucrative target. “From organizations with exposed, unused websites to unencrypted storage drives, health organizations appear to still have much to learn about security,” according to Healthcare IT News. The article listed dozens of sizable healthcare breaches, such as the following examples.

  • The Augusta University Medical Center in Georgia was hit by successful phishing attacks twice within the past year.
  • A cyberattack on Medical Oncology Hematology Consultants in Delaware impacted 19,203 patients, targeting electronic files on the provider’s server and workstation.
  • More than 106,000 patients were notified by Mid-Michigan Physicians Imaging Center of a potential data breach of their personal health information.
  • Pacific Alliance Medical Center in Los Angeles was hit by a ransomware attack involving the health information of 266,123 patients.
  • Patient data for 1.1 million patients enrolled in Indiana’s Health Coverage Program was left open, giving access to name, Medicaid ID number, name and address of doctors treating patients, patient number, procedure codes, dates of service and the amount Medicaid paid doctors or providers.
  • Molina Healthcare, which insures 4.8 million patients in 12 states and Puerto Rico, shut down its client portal in response to a security flaw that exposed patient medical claims data without requiring authentication.
  • Cybercriminal organization TheDarkOverlord hacked the server and back-up drive of Cancer Services of East Central Indiana-Little Red Door. The organization stripped data, encrypted it and asked for a $43,000 ransom as well as making extortion threats.

Information Assurance and Security Salary

The BLS does not track salary data for information assurance workers.

Information security analysts earn a median annual wage of $92,600. In the top industries, they earn the following median salaries.

  • Finance and insurance: $94,050
  • Computer systems design and related services: $93,490
  • Information: $92,940
  • Administrative and support services: $92,890
  • Management of companies and enterprises: $87,510

Overall, the median annual wage for computer and information technology occupations is $82,860.

Advancing Your Career

Employment in information assurance or security positions typically requires at least a bachelor’s degree. For instance, according to the BLS, information security analysts usually need at least a bachelor’s in computer science, information assurance, programming or a related field.

With an online computer science degree from Concordia University Texas, you can learn the knowledge and skills needed to pursue a rewarding career in information assurance or security, along with other computer-related fields. Learn in a flexible, convenient online environment with a schedule that fits your life.

Software Development, Rapid Iteration and the RITE Method

In 2002, five Microsoft employees presented a research paper and case study that began by looking at usability research in commercial settings. The authors observed that usability work often focuses on uncovering all the problems rather than on the actual goal of these tests — “shipping an improved user interface as rapidly and cheaply as possible.”

Usability issues are sometimes not fixed. The authors gave four reasons to help explain why this occurs.

  1. Decision-makers for the product don’t think that the issues uncovered are “real” or worthy of a fix.
  2. Fixing problems takes time and resources. Adding features is often prioritized over fixing a “working” feature.
  3. Usability feedback comes too late. When feedback arrives after product feature decisions have been made, fixes are more likely to be left out.
  4. Teams aren’t sure whether a proposed solution will fix the problem. Difficult or time-consuming fixes are undesirable if there’s uncertainty whether they’ll work.

 

Confronted with these problems, the authors presented a cost-effective and time-efficient alternative. The seminal work established and defined the RITE method, which is based on a process known as rapid iteration.

Rapid Iteration Process

Rapid iteration or prototyping is a process that is used often for software development. Smashing Magazine defines rapid prototyping as an iterative approach to user interface design that quickly mocks up the future state of a system, such as a website or application, and validates it with a broader team of users, stakeholders, developers and designers.

There are multiple iterations of a three-step process; a simplified Python sketch of the loop follows the list.

  1. Prototype: Convert the users’ description of the solution into mock-ups, factoring in user experience standards and best practices.
  2. Review: Share the prototype with users and evaluate whether it meets their needs and expectations.
  3. Refine: Based on feedback, identify areas that need to be refined or further defined and clarified.
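
The loop below is a deliberately simplified Python sketch of those three steps; every function body is a placeholder standing in for real design and usability-testing work, and the issue names are invented.

    def prototype(design, feedback):
        """Fold the latest feedback into a new mock-up (placeholder logic)."""
        return design + [f"fix: {issue}" for issue in feedback]

    def review(design):
        """Stand-in for a usability session; returns the issues participants hit."""
        return [] if "fix: unclear checkout button" in design else ["unclear checkout button"]

    design = ["initial mock-up"]
    feedback = review(design)
    iterations = 0
    while feedback and iterations < 10:       # cap iterations so the sketch always terminates
        design = prototype(design, feedback)  # 1. Prototype: fold feedback into a new mock-up
        feedback = review(design)             # 2. Review: test the new mock-up with users
        iterations += 1                       # 3. Refine: repeat until no issues remain
    print(design)  # ['initial mock-up', 'fix: unclear checkout button']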

Rationale

Why should developers make multiple iterations? There are several steps involved in improving a product, or more generally, developing a solution to a problem. “We need to understand the problem, gather requirements for a potential solution, translate those requirements into a design, build the solution, and test it,” according to IBM. “This order is fairly natural, and generally correct. Problems creep in, however, when we try to scale this up — that is, when we try to gather all requirements, then do all design, then all development, then all testing in a strictly linear fashion.”

Instead, the scientific approach should be integrated into software development. Just like theories are proposed and experiments are designed and performed to test those theories, so too can requirements for a product be tested to gauge whether they should be implemented and if the right solution is defined.

“This leads us to adopt a style of software development where the assertions inherent in the plan are repeatedly challenged and evaluated by the design and development of demonstrable versions of the system, each of which is objectively shown to reduce the project risk and build upon the other to form the finished solution,” IBM added.

Benefits

As a result, the process yields greater insight into, and accuracy about, what changes should be made. And it all happens much faster, and thus at lower cost, than linear usability processes. Additional benefits exist too. Designers often struggle with clients who want proof that their design will succeed, according to author Roger Martin in Harvard Business Review. “They hit a ‘prove-it’ wall: their clients ask for evidence that the design will succeed. The more radical and bold the design, the bigger a problem this is for the frustrated designer.”

However, in rapid prototyping, each iterative test generates data that the client can see and gain confidence from. By being able to see positive reactions from users, the client can trust the design. Rapid iteration and prototyping produce real-world data that can help designers, developers and other professionals who are using this process.

 

Chart with yellow and purple circles representing the rapid iteration method.

The RITE Method

Rapid iteration or prototyping is embedded in the RITE method, which stands for Rapid Iterative Testing and Evaluation. The Microsoft employees who first defined and explored the method formally described it as “very similar” to a traditional usability test.

“The usability engineer and team must define a target population for testing, schedule participants to come in to the lab, decide on how the users’ behaviors will be measured, construct a test script and have participants engage in a verbal protocol (e.g. think aloud),” the authors wrote. “RITE differs from a ‘traditional’ usability test by emphasizing extremely rapid changes and verification of the effectiveness of these changes.”

The method involves implementing changes as soon as a problem is identified and a solution is clear. That can mean within the same day, or even within a two-hour window. After each participant, time must be set aside to review results with decision-makers and decide if issues raised warrant changes for the next iteration of the prototype.

Tips for Using the RITE Method

Here are some tips for getting started with the RITE method, according to UX Magazine.

  • Schedule 30 minutes after each session to discuss what people observed, form hypotheses about why users were confused or stuck on certain parts and come up with ideas for improvements.
  • Designate one person as the ultimate decision-maker on what changes should be implemented before the next participant.
  • Have someone on-site who is willing and able to make quick iterations.
  • Be willing to fail early and often. Making decisions quickly before the next participant can help in overcoming any lingering fears.

Pursuing a Career in Development

Employment of software developers is projected to increase by 24 percent by 2026, according to the Bureau of Labor Statistics, which is much faster than the average for all occupations. The median annual wage is $102,280.

With an online computer science degree from Concordia University Texas, you can learn the knowledge and skills needed to pursue a rewarding career in software development or another field. Learn in a flexible, convenient online environment with a schedule that fits your life.

 

Data Encoding Techniques 101

“Information” refers to data that has been decoded, according to software company Micro Focus. It is the real-world, useful form of data.

Data is technically a series of electrical charges arranged in patterns to represent information. Before data in an electronic file can be decoded and displayed on a computer screen, it must be encoded.

What Is Data Encoding?

Computers use encoding schemes to store and retrieve meaningful information as data. This is known as data encoding.

On electronic devices like computers, data encoding involves certain coding schemes that are simply a series of electrical patterns representing each piece of information to be stored and retrieved. For instance, a series of electrical patterns represents the letter “A.” Data encoding and decoding occur through electronic signals, or the electric or electromagnetic encoding of data.

In data encoding, all data is serialized, or converted into a string of ones and zeros, which is transmitted over a communication medium like a phone line. “Serialization must be done in such a way that the computer receiving the data can convert the data back into its original format,” according to Microsoft. “How serialization is accomplished is called a communication protocol, and is controlled by both software and data-transmission hardware. There are several levels at which the data is converted.”

The application layer on the first computer sends the data to be transmitted to the encode/decode layer. It encodes the data into a stream of computer bytes, and the hardware layer converts the bytes of data into a serial stream of ones and zeros that is transmitted over the line to the second computer. The hardware layer of the second computer converts the ones and zeros back into computer bytes, and then passes them to the encode/decode layer for decoding. The bytes are decoded back to their original format, and they are passed up to the application layer.
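
The round trip described above can be sketched in a few lines of Python, assuming UTF-8 as the character encoding; the bit string stands in for the serial stream sent over the line.

    text = "A"                               # application-layer data

    encoded = text.encode("utf-8")           # encode/decode layer: characters -> bytes
    bits = "".join(f"{byte:08b}" for byte in encoded)  # hardware layer: bytes -> ones and zeros
    print(bits)                              # 01000001 -- the pattern representing "A"

    # The receiving computer reverses each step.
    received = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    assert received.decode("utf-8") == text  # back to the original format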

Data encoding requires an agreed-upon method that governs how data is sent, received and decoded. The method needs to address certain key questions, according to Micro Focus.

  • How does a sending computer indicate to which computer it is sending data?
  • If the data will be passed through intervening devices, how are these devices to understand how to handle the data so that it will get to the intended destination?
  • What if the sending and receiving computers use different data formats and data exchange conventions? How will data be translated to allow its exchange?
A communication model known as OSI (Open Systems Interconnection) was developed by the International Organization for Standardization to respond to these questions. OSI controls data transmission on computer networks. It is not a communication standard; it is a guideline for developing such standards. The OSI model can help explain how data can be transferred between two networked computers.

Illustration of a pyramid representing the seven layers of the OSI model.

The Seven-Layer OSI Model

Controlling communications across a computer network is too complex to be defined by one subtask, so the OSI model is divided into seven subtasks. These subtasks correspond to the seven layers.

Layer 1: Physical

The lowest layer of the OSI model concerns the transmission and reception of the unstructured raw bit stream. The physical layer describes the electrical/optical, mechanical and functional interfaces to the physical medium and carries the signals for all of the higher layers. According to Microsoft, it also provides data encoding, physical medium attachment, transmission technique functions and physical medium transmission.

Layer 2: Data Link

The data link layer ensures reliability of the first layer, providing error-free transfer of data frames from one node to another. Various standards dictate how data frames are recognized; for instance, frame error checking will analyze received frames for integrity.

Layer 3: Network

This layer establishes, maintains and terminates network connections. Standards for the network layer, such as logical-physical address mapping and subnet traffic control, define how data routing and relaying are handled.

Layer 4: Transport

The transport layer insulates the upper layers — layers five through seven — by dealing with complexities of layers one through three. This layer varies, however. “The size and complexity of a transport protocol depends on the type of service it can get from the network layer,” according to Microsoft. “For a reliable network layer with virtual circuit capability, a minimal transport layer is required. If the network layer is unreliable and/or only supports datagrams, the transport protocol should include extensive error detection and recovery.”

Insulating the upper layers from complexities of the other layers is accomplished by providing functions necessary to guarantee a reliable network link. Examples include error recovery and flow control between the two end points of the network connection.

Layer 5: Session

The session layer oversees user connections and manages the interaction between different stations. Services include session establishment, maintenance and termination, as well as session support.

Layer 6: Presentation

The presentation layer formats the data that is presented to the application layer. These transformations provide a common interface for user applications. It acts as a sort of “translator” for the network, so the application layer can have data in a common format. The layer provides character code translation, data conversion, data compression and data encryption.
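
A minimal sketch of those services in Python follows, using the standard library for translation and compression; the XOR step is a toy stand-in for encryption, which in practice would use a vetted cipher.

    import zlib

    def xor(data: bytes, key: int = 0x5A) -> bytes:
        """Toy encryption stand-in; applying it twice restores the input."""
        return bytes(b ^ key for b in data)

    message = "Grüße"                        # non-ASCII text needing translation

    translated = message.encode("utf-8")     # character code translation
    compressed = zlib.compress(translated)   # data compression
    protected = xor(compressed)              # data "encryption" (toy)

    # The receiving presentation layer undoes each transformation in reverse order.
    assert zlib.decompress(xor(protected)).decode("utf-8") == message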

Layer 7: Application

The final layer serves as the window for users and application processes to access network services. The application layer provides a wealth of services due to the potentially wide variety of applications involved. These can include remote file and printer access, resource sharing and device redirection, directory services and electronic messaging.

Pursuing a Career in Computer Science

Data encoding is a basic concept within computer science. With an online computer science degree from Concordia University Texas, you can learn the knowledge and skills needed to pursue a rewarding career in this field. Average starting salary projections for new computer science graduates were $78,199, according to figures from the National Association of Colleges and Employers. These graduates also have a high full-time employment rate (83.9 percent) within six months of graduating with a bachelor’s degree.

Learn in a flexible, convenient online environment with a schedule that fits your life.

What Is Neural Network Art?

Monet painting turned into four versions of neural network art.

Photo Credit: Image by Google Inc. under a Creative Commons Attribution 4.0 International License.

In recent years, advances in artificial intelligence (AI) have given rise to a new phenomenon: neural networks. By designing AI to mimic the human brain at a basic level, computer scientists are able to create machines capable of “deep learning”; that is, machines that can learn basic concepts and apply those concepts in situations other than the ones in which they learned them.

While neural networks are used for a variety of purposes, one of the more fascinating applications is neural network art: visual art, music and other forms of creative expression designed by algorithms.

Neural Network Art

Neural networks make use of “deep learning,” a concept that uses our limited understanding of how the human brain works to create networks of algorithms that can “learn” from supplied data. These networks of algorithms, called neural networks, are arranged in a way that imitates the way neurons in the brain connect to each other. Layers of algorithms are able to process increasingly complex data, and their connections to each other further increase that capability. This method of creating AI generates machines that can learn from data inputs and even make guesses based on that data.

Scientific American describes how a neural network can learn to recognize faces:

“To recognize a face, the network sets about the task of analyzing the individual pixels of an image presented to it at the input layer. Then, at the next layer, it chooses geometric shapes distinctive to a particular face. Moving up the hierarchy, a middle layer detects eyes, a mouth and other features before a composite full-face image is discerned at a higher layer. At the output layer, the network makes a ‘guess’ about whether the face is that of Joe or rather that of Chris or Lee.”
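
As a rough illustration of layers processing increasingly complex data, here is a minimal NumPy forward pass through a two-layer network. The random weights are stand-ins for values a real network would learn from labeled examples, and the layer sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, weights):
        """One layer: a weighted sum followed by a simple nonlinearity (ReLU)."""
        return np.maximum(0, x @ weights)

    pixels = rng.random(64)                                # stand-in for an 8x8 input image
    hidden = layer(pixels, rng.standard_normal((64, 16)))  # lower layer: simple features
    output = layer(hidden, rng.standard_normal((16, 3)))   # higher layer: scores for 3 "faces"
    print(output.argmax())                                 # the network's best guess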

But a neural network’s ability extends beyond simply processing inputs and making guesses about them. Neural networks can create outputs based on the data they’re given, a capability that has recently been used to create audio and visual art.

The Emergence of AI Art

Neural networks are being used by artists to enhance or supplement their artistic efforts or to create entirely new pieces of art. Digital art has been around long enough to be acknowledged as its own medium, but Adobe’s Wetbrush technology is blurring the line between digital art and painting. In the past, creating an effect in a digital medium to mimic the look of paint has proved difficult, but Wetbrush uses algorithms and a physics simulation system to create digital brush strokes that appear to have the texture and color of oil paint. Pictures created using Wetbrush can even be printed in 3-D to bring out the natural lighting effects of an oil painting.

Bhautik Joshi, a researcher at Adobe, even applied neural network art to an existing film. Joshi used neural networks to create an effect similar to Pablo Picasso’s cubist style in Stanley Kubrick’s 2001: A Space Odyssey, creating a multi-colored, kaleidoscopic reimagining of the science fiction classic.

Neural networks can be applied to music, too. Flow Machines, a research project coordinated by Sony, was created to “research and develop Artificial Intelligence systems able to generate music autonomously or in collaboration with human artists.” By feeding Flow Machines data, such as compositions by artists ranging from Bach to the Beatles, the neural network is able to create music, including a pop song called “Daddy’s Car,” a collaboration with French songwriter Benoît Carré.

Computer scientists are even experimenting with using neural networks to make video games. Angelina, developed by British researcher Michael Cook, creates video games by scanning online newspaper articles for themes and incorporates those themes with visuals taken from the web. While these games lack the complex narrative structures of modern video games, Angelina is able to create games with goals and activities for players to complete.

Examples of Google Deep Dream image generator transferring natural textures onto objects: a horizon onto a pagoda, trees onto buildings, and leaves onto birds.

Photo Credit: Image by Google Inc. under a Creative Commons Attribution 4.0 International License.

Google’s Deep Dream

Perhaps the most famous example of neural network art is that produced by Google’s Deep Dream Generator. A complex network working with countless pieces of visual data, Deep Dream is an open source neural network art project that any internet user can interact with, feed images to and receive those images back, reinterpreted by Deep Dream.

One of the things that research with Deep Dream illustrates is that the processes that happen at different layers of a neural network are still largely mysterious to computer scientists. While researchers know that each subsequent layer processes increasingly complex and sophisticated types of information, the data that is specifically isolated by each layer is unclear. Google researchers work with Deep Dream, producing results using specific layers of the network in order to learn more about how Deep Dream interprets and produces images.

Deep Dream was — and continues to be — trained with many visual examples of various objects, like animals, fruit, trees and people. Deep Dream is capable of creating new art based on images it’s given, but that art often adheres to themes based on preconceptions Deep Dream develops over time. “For example, horizon lines tend to get filled with towers and pagodas,” according to Google. “Rocks and trees turn into buildings. Birds and insects appear in images of leaves.”

Many of Deep Dream’s pictures are created using a technique called “inceptionism,” where multiple different images are combined within the program to create something of an amalgam of the source images. But Deep Dream doesn’t need images of specific objects to create art; it is capable of producing pictures based on random noise images that do not contain any discernible shapes.

Three versions of the famous Mona Lisa painting altered to resemble the styles of Picasso, Van Gogh and Monet by artist Gene Kogan using Deep Dream techniques.

Photo Credit: Mona Lisa recreated in the styles of Picasso, Van Gogh, and Monet by artist Gene Kogan using style transfer.

And, the more images it obtains, the more sophisticated — and unique — its process becomes. From collaborations with artist Gene Kogan to a two-day exhibit that raised around $84,000, Deep Dream’s art has become increasingly prominent.

Other Neural Network Art

Many computer scientists are exploring what can be done artistically with neural networks in an effort to better understand how they work, how they process image data and how they make decisions. From projects experimenting with inceptionism to work done by organizations like the Visual Geometry Group, neural network art research is a growing field.

As the field grows and researchers learn more about how neural networks work, the need to share information and experiment collaboratively becomes more important, in order to better understand neural network art.

Pursuing a Career in Computer Science

Neural network art wouldn’t be possible without skilled computer scientists developing the code that allows deep learning to function. With an online computer science degree from Concordia University Texas, you can apply your computer science skills to research fields like neural networks. Learn in a flexible environment on a schedule that fits into your life.

Video Game Psychology: Addiction vs. Achievement

As an omnipresent part of modern culture, video games hold a fascination for both researchers and laypeople alike. One particular area of interest lies in their psychological impact on players. The concepts of addiction and achievement have been associated with video game play, leading researchers to wonder about the connection. Due to their unique features, video games can be seen as a catalyst for both.

Structural Features of Video Games

Video games contain numerous features that may contribute to either addiction or achievement.

Social Features

Social features refer to aspects of video games that encourage socialization, cooperation and competition among gamers, explains the International Journal of Mental Health and Addiction. Examples include the institution of guilds in the game “World of Warcraft,” gesture-based communications in the game “Counter-Strike” or the real-world capacity to socialize with others while playing video games.

These features:

  • Give players a platform to engage in social interaction before, during and after game play
  • Help players satisfy the fundamental human need to be part of a social group
  • Provide support networks, which may be used for various purposes including sharing game knowledge and appreciation and recognition from other players

Narrative and Identity Features

Narrative and identity features enable players to take on identities in the game that are not their own, either as fictional characters or gamified constructions of their own personalities. Furthermore, incorporating storytelling elements may help players more fully immerse themselves in game play. Examples include avatar creation in massively multiplayer online role-playing games (MMORPGs) or mystery-solving games such as “Gone Home” or “Her Story.”

Narrative and identity features:

  • Enable players to form attachments to their game characters and emotionally invest in those characters’ stories
  • Offer simple or complex storytelling, providing a sense of escape or catharsis
  • Demonstrate thematic or genre features, such as “first-person shooting” games, “role-playing” games, “strategy” games or “puzzle” games

Reward and Punishment Features

Reward and punishment features are designed to reinforce skillful play and penalize players who perform poorly.

Reward features:

  • Provide positive reinforcement for players
  • Can be delivered randomly, intermittently or on a fixed schedule; can also be “unlocked” when a player does something particularly special within the game

Punishment features:

  • Help establish contextual worth of rewards
  • Reinforce the idea that a game is based on skill

The psychology of video games remains fascinating, as these features have been found to contribute to both addiction and achievement in research subjects.

 

The Link Between Video Games and Addiction

The popularization of video games has brought with it a slew of problems, according to Daria J. Kuss in Psychology Research and Behavior Management.

Video game psychology can lead to an addictive lifestyle. Because the social features of video games have the capacity to fulfill the human need for social interaction, players can require continuous play (much as alcoholics require alcohol) to continue fulfilling this requirement. In some games such as “World of Warcraft,” expectations for game success can mirror the sorts of social expectations found in life. Thus, game play can be used as a compensation mechanism for individuals who lack success and rewarding relationships in the real world.

Furthermore, the ability to create avatars or take on character identities can entice some individuals into placing disproportionate emotional emphasis on events or figures that happen in-game.

Kuss finds reward systems particularly problematic. Studies, she explains, have shown that areas of the brain relevant to addiction can change when subjected to reward or motivation systems in video games, resulting in the release of dopamine. Over time, this process can lead to synaptic changes in the brain, causing long-term psychological problems.

Consequences of video game addiction may include:

  • Loneliness
  • Stress
  • Dysfunctional coping
  • Poor academic achievement
  • Sleep problems

With this said, it must be noted that not all video games offer similar detriments. MMORPGs are more likely to become addictive than other sorts of video games because of the abundant presence of the features listed above.

The Link Between Video Games and Achievement

While video games can certainly prove detrimental, they have their benefits as well.

In the academic article “The Benefits of Playing Video Games,” three authors discuss the psychology of video games by highlighting a number of benefits that may help players generate feelings of achievement or cultivate actual successes in their lives.

  • Social features in video games (particularly those hosted online) may enable gamers to attain social successes that would be otherwise difficult, such as overcoming social phobia or physical immobility.
  • Narratives help humans to conceptually frame the world around them. Narratives in video games, therefore, may offer the same opportunities to generate empathy and learn problem solving that any other storytelling medium might.
  • Because adaptation to circumstances is rewarded within game play, playing video games may help promote players’ ability to effectively handle their own emotions.
  • Score systems can offer tangible tools for self-assessment and comparison.
  • Feedback messages and reward systems can provide players with encouragement and gratification, inspiring them to improve and learn continuously.
  • Success in certain areas of life (such as video gaming) can help trigger players’ desires for achievement in the real world as well.

Finally, there are certain video games specifically designed to help game players achieve their goals in the real world. Many of these are related to gamification in healthcare, such as the exercise system Wii Fit or the game “Re-Mission,” which is designed to help cancer patients better understand and manage their disease.

Ultimately, whether video games are beneficial or detrimental seems to depend more on the context of their use than on the games themselves.

Additional Sources: Game Reward Systems: Gaming Experiences and Social Meanings, “Narrative and Gameplay in Game Design” 

Computer Science and You

For those who want to understand the future of this fascinating form of entertainment, the online computer science degree from Concordia University Texas can provide them with the skills they need for success. Concordia also offers an online psychology degree for those interested in the connection between human beings and their technology. Programs are offered fully online, providing the ultimate flexibility when earning a degree.

Exploring the Pros and Cons of Video Gaming

Video games are a trademark commodity in our modern, technologically driven culture. Although undoubtedly entertaining, considerable debate remains as to their relative positive or negative impacts on individuals and society. Even though studies regarding this issue are relatively new, they can still provide insight into the benefits and drawbacks of this popular pastime.

Positive Effects of Video Games

Numerous studies have examined the pros and cons of video gaming. Here are just a few that have found video games provide distinct advantages to their players.

Video games improve basic visual processes

According to Psychology Today, playing video games has been shown to increase players’ ability to distinguish subtle differences in shades of gray, a phenomenon known as “visual contrast sensitivity.” They may also improve the eyesight of the visually impaired and help players increase their ability to visually detect the direction of movement.

Video games can enhance executive functioning

“Executive functioning” is the term used for a person’s ability to rapidly and efficiently solve problems. Video games can help improve multitasking, increase mental flexibility and even reverse the mental decline that occurs as people age.

Video games can improve everyday skills

Playing video games has been found to enhance hand-eye coordination, lengthen attention spans and improve both working memory and rapid decision-making abilities.

Video games may help ease anxiety and depression

Both anecdotally and scientifically, video games have been shown to reduce symptoms of anxiety and depression. For example, Scientific American reported that the game Tetris may actually ease the symptoms of post-traumatic stress disorder. In forums such as Geek and Sundry, authors describe how video games may help those who have social anxiety disorder learn how to initiate relationships and learn about social cues.

Negative Effects of Video Games

While video games have been touted for their advantages, evidence also suggests that playing them may be detrimental.

Video games can make people more violent

According to The Telegraph, researchers have found a direct link between violent video games and an increase in aggressive behavior. This applies particularly to “shoot-em-up” games that simulate firearms.

Video games may decrease players’ ability to concentrate

A study published in Psychology of Popular Media Culture found a correlation between the length of time individuals play video games and their ability to remain focused. The study also suggested that playing video games may exacerbate the impulsiveness of individuals who already have this inclination.

Video games can become addictive

A university study found that one in 10 youth gamers is “addicted”: their playing habits cause family, social, school or psychological damage. Treatment programs combating video game addiction have cropped up across the world, including in the United States, South Korea and the Netherlands.

Video games may increase depression and anxiety

While it’s true that video games can help combat anxiety and depression, other studies have shown that they might cause or exacerbate these conditions instead. A study in Cyberpsychology, Behavior and Social Networking, for example, found that fifth-graders who play video games two or more hours a day are more likely to have symptoms of depression than those who play less.

While the arguments on both sides of this debate continue, conclusions as to whether video games are ultimately “good” or “bad” lie with the reader. The truth is most likely somewhere in between, with allowances made for personal circumstances and preferences.

Digital Culture, Real World

Video games are just a small part of the fascinating and developing world of modern digital culture. For those who want to increase their understanding of 21st-century technology, the online computer science degree from Concordia University Texas can provide an educational foundation for success. The program equips students with the skills they need to master current technology and prepare for future developments in the field.

5 System Development Life Cycle Phases

“A good system shortens the road to the goal.” –Orison Swett Marden, author and founder of Success magazine

No field stresses the importance of a well-built system quite like computer science. Effective computer systems ensure a logical workflow, increase general efficiency and make it easier for companies to deliver high-quality products to their clients. The System Development Life Cycle is a process that involves conceptualizing, building, implementing, and improving hardware, software, or both. The System Development Life Cycle must take into consideration both the end user requirements and security concerns throughout all its phases. From banking to gaming, transportation to healthcare, the System Development Life Cycle is applicable to any field that requires computerized systems.

5 Phases of the System Development Life Cycle

Although there are numerous versions and interpretations of the System Development Life Cycle, below are five of the most commonly agreed upon phases and their characteristics. It is important to note that maintaining strong communication with end user clients is crucial throughout the entire process.

1.) Requirements Analysis/Initiation Phase

In this first phase, problems are identified and a plan is created. Elements of this phase include:

  • Defining the objectives of the project, as well as end user expectations and requirements
  • Identifying available resources, such as personnel and finances
  • Communicating with clients, suppliers, consultants and employees to discover alternative solutions to the problem at hand
  • Performing system and feasibility studies
  • Studying how to make a product better than those of competitors

 
Security considerations:

  • Finding key areas where security is necessary in the system
  • Evaluating information for security requirements
  • Ensuring that everyone involved in the project has a common understanding of the security considerations
  • Identifying the information system security officer, an individual responsible for overseeing all security considerations

The importance of the Requirements Analysis/Initiation phase cannot be overemphasized. Thorough planning saves time, money and resources, and it ensures that the rest of the System Development Life Cycle continues as smoothly as possible.

2.) Development/Acquisition Phase

Once developers reach an understanding of the end user’s requirements, the actual product must be developed. The Development/Acquisition phase can be considered one of conceptual design and may involve:

  • Defining the elements required for the system
  • Considering numerous components such as security level, modules, architecture, interfaces and types of data that the system will support
  • Identifying and evaluating design alternatives
  • Developing and delivering design specifications

 
Security considerations:

  • Conducting risk assessments
  • Testing for system function and security
  • Preparing initial documents for system certification and accreditation
  • Designing security architecture
  • Developing security plans

3.) Implementation Phase

In this phase, physical design of the system takes place. The Implementation phase is broad, encompassing efforts by both designers and end users. Elements include:

  • Writing code
  • Physical construction of the system
  • Designing numerous items including output, input, databases, programs, procedures and controls
  • Installing hardware and software
  • Testing the system
  • Potentially converting between old and new systems, depending on the project
  • Training personnel on how to use the system
  • Fine-tuning various systemic elements to work out all remaining issues

 
Security considerations:

  • Designers must ensure that all system elements meet security specifications and do not conflict with or invalidate any existing controls
  • All security features are configured and enabled
  • Functionality of all security features is tested
  • Relevant personnel obtain formal authorization to implement these systems

This phase may also include testing and integration, or the process of ensuring that the entire system successfully works together as a single entity. Testing can be done by real users, trained personnel or automated systems; it is becoming an increasingly important process for purposes of customer satisfaction. Depending on the system in question, the Implementation phase may take a considerable amount of time.

4.) Operations Maintenance Phase

Once a system is delivered and goes live, it requires continual monitoring and updating to ensure it remains relevant and useful. Requirements of this phase may include:

  • Periodically replacing old hardware
  • Regularly evaluating system performance
  • Providing updates for certain components to ensure they meet standards
  • Delivering improved systems when necessary
  • Analyzing whether or not certain elements remain feasible for the system’s continued use, such as its economic value, technical aspects, legal requirements, and scheduling and operation needs

 
Security considerations:

  • Continual monitoring of system to ensure that it remains consistent with the client’s established security requirements
  • Security system modifications are incorporated when needed
  • Configuration management activities are conducted to ensure consistency of the program
  • Documenting any changes in the system and assessing their potential impacts

The Operations Maintenance phase continues indefinitely until a new problem is discovered. If it is determined that the system must be completely rebuilt, the Disposal phase begins.

5.) Disposal Phase

This phase represents the end of the cycle, when the system in question is no longer useful, needed or relevant. In this phase:

  • Plans are created to discard system information, hardware and software
  • Arrangements are made to transition to a new system
  • Information may be moved to the new system as well, or otherwise discarded, destroyed or archived
  • For archived information, consideration is given to methods of future retrieval

 
Security considerations:

Security considerations are aligned with the above actions. It must be noted that executing the Disposal phase correctly is crucial, as major errors at this juncture put companies at risk of exposing sensitive data.

After the Disposal phase, the System Development Life Cycle begins again. The cycle usually has no conclusive end. Instead, systems generally evolve to match improvements in technology or to meet changing needs.

Sources: National Institute of Standards and Technology, Airbrake, Accounting Information Systems 9th Edition

Your Career in Computer Science

For anyone considering a job in computer science, the System Development Life Cycle is just one of many essential concepts required to know for a successful career. Programs like the online BA in Computer Science from Concordia University Texas offer the opportunity for students to gain a solid educational foundation, paving their way to achieve in the industries of the 21st century. The program offers rolling enrollment, allowing students to begin their education at a time that works for them.

 

Computer Science Algorithm Examples

Algorithms are essential building blocks in the practice of computer science. As written instructions that help computers operate, they ensure the accomplishment of particular functions, as well as the speed and total workability of software systems. While computer systems use many algorithms, a few in particular are notable for their significant impact.

Notable Computer Science Algorithms

The following algorithms have been particularly influential to today’s advancements in computer science.

Graph representing the Fourier Transform computer algorithm.

Fourier Transform

The Fourier Transform was discovered by Jean Baptiste Joseph Fourier, a French mathematician who lived between 1768 and 1830, reports Professor John Daugman of the University of Cambridge Computer Laboratory. Fourier’s algorithm provides an alternative way to represent periodic or cyclic phenomena. It rests on the idea that any space- or time-varying data can be transformed into frequency space. In other words, it allows scientists to take a time-based pattern, measure every possible cycle within that pattern and return information about how the pattern functions.

In addition to computer science, the significance of the Fourier Transform can be seen in physics, engineering and computational mathematics. The Fourier Transform allows data to be manipulated quickly, enabling a better understanding of phenomena such as sound, water and radio waves, the analysis of linear systems and the interpretation of complex data such as that relating to atomic structures and economics. The Fourier Transform algorithm is a key tool in computer science for circuit design, signal processing and communications. It makes possible such technologies as the internet, Wi-Fi routers and satellites.
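
A short NumPy sketch makes the idea concrete: build a time-based pattern, transform it into frequency space and recover the cycles inside it. The sample rate and component frequencies are arbitrary choices for the example.

    import numpy as np

    sample_rate = 1000                           # samples per second (assumed)
    t = np.arange(0, 1, 1 / sample_rate)         # one second of time-based samples
    signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    spectrum = np.fft.rfft(signal)               # transform into frequency space
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

    # The two strongest cycles recovered from the pattern: 50 Hz and 120 Hz.
    peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
    print(sorted(peaks))                         # [50.0, 120.0]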

Diagram illustrating Dijkstra’s computer algorithm.

Dijkstra’s Algorithm

Edsger Wybe Dijkstra was a Dutch computer scientist who lived between 1930 and 2002. Dijkstra’s algorithm provides a way to find the shortest distance between a single node on a graph and all other available vertices.

The concept of finding the shortest path is a problem-solving model that is widely used both in computer science and beyond, Princeton University explains. Its main benefit lies in the ability to provide increased efficiency within systems. Applications include programs such as Google Maps, urban traffic planning, robot navigation and the routing of telecommunication messages.
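
A compact Python implementation over a small adjacency map follows; the graph and its edge weights are invented for illustration.

    import heapq

    def dijkstra(graph, source):
        """Return the shortest distance from source to every reachable node."""
        dist = {source: 0}
        queue = [(0, source)]
        while queue:
            d, node = heapq.heappop(queue)
            if d > dist.get(node, float("inf")):
                continue                  # stale entry; a shorter path was already found
            for neighbor, weight in graph[node].items():
                candidate = d + weight
                if candidate < dist.get(neighbor, float("inf")):
                    dist[neighbor] = candidate
                    heapq.heappush(queue, (candidate, neighbor))
        return dist

    # Hypothetical road network; weights are travel times.
    graph = {
        "A": {"B": 4, "C": 2},
        "B": {"D": 5},
        "C": {"B": 1, "D": 8},
        "D": {},
    }
    print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 8}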

Illustration of encryption keys representing the RSA computer algorithm.

RSA Algorithm

The RSA algorithm is named after its three creators: Ron Rivest, Adi Shamir and Leonard Adleman. Introduced in 1978, it is a cryptographic algorithm designed to make electronic transmissions more secure, explains Evgeny Milanov of the University of Washington. As a response to the projected rise in email use, it introduced the following concepts:

  • Public key encryption. Before the RSA algorithm, secret keys had to be carried by trusted “couriers” before messages could be exchanged securely across a network. The RSA algorithm lets every user publish an encryption key openly while keeping the matching decryption key private, so no courier is needed.
  • Digital signatures. The receiver of a message may need proof that it actually originated from the claimed sender. RSA lets a sender “sign” a message with the private key, and anyone can verify the signature with the corresponding public key. The algorithm is constructed so that the signature cannot be forged, and no signer can later deny having signed the message.

Both of these concepts provide enhanced protection for electronic transmissions, including email, fund transfers and other secure transactions.
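
The arithmetic behind both ideas fits in a few lines. Below is a toy walkthrough in Java using java.math.BigInteger; the tiny primes (61 and 53) and the message value 65 are illustrative only, as real keys use primes hundreds of digits long.

    import java.math.BigInteger;

    // A toy walkthrough of RSA key generation, encryption and decryption.
    public class ToyRsa {
        public static void main(String[] args) {
            BigInteger p = BigInteger.valueOf(61);  // two secret primes
            BigInteger q = BigInteger.valueOf(53);
            BigInteger n = p.multiply(q);           // public modulus: 3233
            BigInteger phi = p.subtract(BigInteger.ONE)
                    .multiply(q.subtract(BigInteger.ONE)); // 3120
            BigInteger e = BigInteger.valueOf(17);  // public exponent, coprime to phi
            BigInteger d = e.modInverse(phi);       // private exponent: 2753

            BigInteger message = BigInteger.valueOf(65);
            BigInteger cipher = message.modPow(e, n);    // encrypt with the public key
            BigInteger decrypted = cipher.modPow(d, n);  // decrypt with the private key
            System.out.println(cipher + " -> " + decrypted); // prints 2790 -> 65

            // A signature reverses the roles: transform with the private key,
            // and let anyone verify the result with the public key.
            BigInteger signature = message.modPow(d, n);
            System.out.println(signature.modPow(e, n).equals(message)); // true
        }
    }

With primes hundreds of digits long, the same three operations still run quickly, but factoring n back into p and q becomes computationally infeasible, which is what keeps the private key safe.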

Ultimately, these three algorithms have provided us with key tools to create computer technology as we know it today.

Using Algorithms in Computer Science Careers

While algorithms have proved useful to mathematicians for centuries, they now occupy a crucial place in the advancement of computer science. Those who want to pursue this dynamic area of study can begin by earning their online computer science degree from Concordia University Texas. Students learn to work with today’s technology and to adapt as it changes, ensuring they are well-prepared for the field.

The post Computer Science Algorithm Examples appeared first on Concordia University Texas.

]]>
Competing in the Mobile Market: iOS vs. Android Development https://online.concordia.edu/computer-science/ios-vs-android-development/ Mon, 02 Jan 2017 17:26:40 +0000 http://online.concordia.edu/?p=4831 Now, it seems, is an opportune time to join the app development market. According to the Bureau of Labor Statistics, software developer positions are projected to increase 17 percent by 2024. App developers alone earn a mean annual wage of more than $102,000. However, successfully competing in the mobile market is an endeavor that requires... Read more »

The post Competing in the Mobile Market: iOS vs. Android Development appeared first on Concordia University Texas.

]]>

Now, it seems, is an opportune time to join the app development market. According to the Bureau of Labor Statistics, software developer positions are projected to increase 17 percent by 2024. App developers alone earn a mean annual wage of more than $102,000. However, successfully competing in the mobile market requires a significant understanding of market conditions, development tools and the chances of making a profit. A key element in evaluating these factors is choosing the right operating system for development: iOS or Android.

The Mobile App Market

Together, the Android and iOS operating systems currently make up around 99 percent of the smartphone market, reports Statista.com, and the mobile app market itself is growing rapidly. By 2020, it is expected to reach gross annual revenue of $101 billion, according to the market research firm App Annie. The firm predicts that the impact will be global: not only will mature markets continue to see growth, but emerging markets will expand as well, particularly in countries such as India, Turkey, Indonesia, Argentina and Brazil.

It is important for developers seeking to gain a footing in the mobile app market to understand the kinds of apps that are in high demand. According to Jana.com, the two most in-demand types are utility apps (such as GPS systems or calculators) and communication apps (Skype, Google Hangouts, etc.). While games are also in demand, their popularity varies with the geographical location of users. Apps that are less in demand but still notable include those for shopping, music and audio, and news and magazines.

App developers must also understand where their products are most likely to sell. Nearly 85 percent of all smartphones sold to end users run Android, Statista.com reports. While this seems to imply that Android is the better platform to develop for, other factors, such as earning potential and personal familiarity with the development tools, must be weighed as well.

Development Tools

To successfully build on either platform, developers must understand the tools and processes required to do so. Below are elements to consider for iOS vs. Android.

iOS

iOS is the operating system associated with Apple, the company behind the iPhone, iPod, MacBook and other products. According to Android Authority, keep the following in mind when developing for iOS:

  • iOS can be coded in multiple languages, although they are specific to the operating system. These include Swift (designed by Apple specifically for its platforms) and Objective-C. (For developers already familiar with C and C++, Objective-C can prove an easy language to learn.)
  • iOS uses Xcode as its development environment.
  • iOS runs only on Apple devices. The system is therefore considered “closed” and lacks cross-functionality with the majority of app-supporting devices. This does, however, make coding problems easier to fix when they arise, since there are far fewer device configurations to support.
  • Developing for iOS generally has higher startup costs. Since it is a closed system, iOS apps can only be developed on Apple hardware, which tends to be expensive.
  • Design requirements tend to be more stringent. Apple expects an overall minimalist look, navigation routed through a tab bar and thinner lines.

Android

Android is an operating system developed by Google. It can be found in a substantial variety of devices, including tablets, smartphones, gaming systems and automobile computer systems. Development considerations for Android include:

  • Android is available on a wide range of devices. (It can be found on more than 5,000 different devices, according to Upwork.com.) Since manufacturers, screen sizes and features all vary substantially, Android apps require significantly more testing during development.
  • Android can be coded in multiple languages. Java is the most common (a minimal Java example follows this list); others include HTML, CSS and Corona.
  • Since Android is an open platform, it gives developers greater control and more decision-making power. Tools such as Android Studio are also available to help with the development process.
  • Outputs from the code tend to be complex, which can prove difficult for beginning coders.
  • Android is compatible with most major operating systems and can be developed on machines that run Windows, Linux or Mac.
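
To give a feel for the platform, here is a minimal Android activity written in Java, the language the list above notes as the most common. The package name is hypothetical, and the sketch assumes only standard Android SDK classes (Activity, Bundle, TextView); a real project would also need the usual manifest and Gradle build files.

    package com.example.hello; // hypothetical package name

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    // The smallest useful Android component: one screen that shows a label.
    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState); // standard lifecycle hook
            TextView label = new TextView(this); // build the UI in code
            label.setText("Hello, Android!");
            setContentView(label); // make the label this screen's content
        }
    }

Real projects typically define layouts in XML rather than building views in code, but the lifecycle pattern is the same.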

Which system should a developer choose? A common solution is simply to learn both. Fluency in the two platforms maximizes an individual’s programming abilities and potential for success.

Costs, Publishing and Profitability

Major differences between iOS and Android development continue into the realm of accessing the marketplace and the potential for profit. Considerations include:

iOS

  • Developers must subscribe to Apple’s developer program in order to publish their apps in the App Store. The fee for an individual developer is around $100 per year.
  • Apple is strict about the products it allows into the App Store. It checks apps far more stringently and has a higher rejection rate than Google Play.
  • Apps in the App Store tend to generate far more revenue than those on Android.

Android

  • Android has no annual membership or subscription fee; instead, Google Play charges developers a one-time $25 registration fee to upload their apps.
  • Publishing is easier than on iOS, requiring only an upload.
  • Profits on the platform tend to be lower.

Overall, there is no single right answer for app developers deciding which system to pursue. Developers must weigh their current programming skills and knowledge, their preferences and their willingness to put in the effort required to succeed.

Becoming an App Developer

For those seeking to join the app development market, the choice between iOS and Android can prove crucial to their success. Aspiring app developers can begin by earning an online computer science degree from Concordia University Texas. The program teaches students real-world skills that can be immediately applied to their careers.

The post Competing in the Mobile Market: iOS vs. Android Development appeared first on Concordia University Texas.

]]>
Leveling Up: Video Game Development by Language https://online.concordia.edu/computer-science/video-game-development/ https://online.concordia.edu/computer-science/video-game-development/#comments Thu, 29 Sep 2016 17:54:04 +0000 http://online.concordia.edu/?p=4756 Video games are a big business. Total revenue for the U.S. video game industry reached $23.5 billion last year, a 5 percent increase from 2014. Behind every video game are programmers who help develop the product. Although programming languages vary from game to game, a few are the most popular. This post will cover the... Read more »

The post Leveling Up: Video Game Development by Language appeared first on Concordia University Texas.

]]>
Video games are big business. Total revenue for the U.S. video game industry reached $23.5 billion last year, a 5 percent increase from 2014. Behind every video game are programmers who help develop the product. Although programming languages vary from game to game, a few are the most popular.

Here’s a look at the languages powering video game development.

Assembly

First appearing in 1949, assembly was used for decades. Its instructions map almost directly onto a processor’s machine code, which made programs fast but laborious to write. It was the primary language for home computers in the 1980s, including the Commodore 64, Commodore Amiga and Atari ST, and many Sega Genesis and Super Nintendo games were written in forms of assembly. But in the 21st century, the use of assembly in video game development has mostly dried up.

C

Dennis Ritchie of Bell Labs developed C for the UNIX operating system in 1972. It remains one of the most popular programming languages because of its general simplicity and strong structure. C’s stability made it a favorite of some video game designers: id Software, a PC gaming company that released several landmark games in the 1990s, used the C language for Doom and Quake. C is less common in games today, but plenty of developers still work in it.

C++

A decade after C was unveiled, Bjarne Stroustrup created a follow-up language he called C++. Stroustrup wanted a language with the portability of C that was also object-oriented, letting programmers model a program as a collection of distinct, interacting objects. C++ is the language behind many of today’s operating systems, software, games and game engines. Few platforms restrict C++, which makes games easily portable from PCs to consoles and vice versa. Notable games using C++ include Counter-Strike, World of Warcraft and Diablo.

C#

Developed by Microsoft in 2000, C# was designed as an alternative to C++ with better integration with Microsoft’s platforms. It differs from C++ in that it automatically manages a computer’s memory for the programmer. C# is part of Microsoft’s Visual Studio suite of programming products, making it easy to access. C# is quite popular among game developers, particularly independent ones: the Unity engine, a widely used game engine for PCs, consoles, mobile devices and more, uses C# as its primary scripting language. Notable games include Angry Birds, Hearthstone and Bastion.

Java

Java is in several ways a cousin to C#: both are object-oriented, and both manage memory with a garbage collector. But Java is platform independent, meaning that it runs the same way on every platform that supports it, which isn’t always the case with C#. Even so, Java hasn’t been nearly as popular in video game development, partly because of the Visual Studio tooling available for C#. There aren’t many game engines written in Java, so independent developers often have to build one from scratch, starting with a game loop like the sketch below. A few Java success stories are the indie hits RuneScape and Minecraft.
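
For developers building from scratch, the core of any hand-rolled engine is that loop. Here is a common fixed-timestep pattern in plain Java; update() and render() are hypothetical stand-ins for a real game’s simulation and drawing code.

    // A fixed-timestep game loop: simulate at a steady 60 updates per second,
    // and render as often as time allows.
    public class GameLoop {
        private static final long NANOS_PER_UPDATE = 1_000_000_000L / 60;
        private boolean running = true;

        public void run() {
            long previous = System.nanoTime();
            long lag = 0;
            while (running) {
                long current = System.nanoTime();
                lag += current - previous;
                previous = current;
                while (lag >= NANOS_PER_UPDATE) { // catch the simulation up to real time
                    update();
                    lag -= NANOS_PER_UPDATE;
                }
                render();
            }
        }

        private void update() { /* advance the game state by one fixed tick */ }
        private void render() { /* draw the current frame */ }
    }

The fixed timestep keeps physics and game logic deterministic even when rendering slows down, which is why this pattern appears in engines across languages.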

Programming Your Future

Video game development is just one facet of the rapidly expanding computer science industry. Learn about the online Bachelor of Arts in Computer Science degree from Concordia University Texas.

The post Leveling Up: Video Game Development by Language appeared first on Concordia University Texas.

]]>