Papers

Paper Titles

  • Paper 04 – Titled: An Analysis of Current Methods in Information Management Segmentation 
  • Paper 05 – Titled: Framing the Domain Problem - Contextual Influences on Content Interaction 
  • Paper 06 – Titled: A Critical Analysis on Identifying Effective Information Behaviours 
  • Paper 07 – Titled: The Role and Effectiveness of Using Bayesian Statistics in Software Engineering Quality
  • Paper 08 – Titled: How to Engage the Strategic Mind
  • Paper 09 – Titled: Flow Analysis
  • Paper 10 – Titled: Architecting Smarter Services with Big Data Analytics
  • Paper 11 – Titled: From Complexity to Using Big Data to Understand Crime: Data Enhanced Predictive Modelling for Targeting Crime Analysis
  • Paper 12 – Titled: Identifying bundles of crimes using big data – Accelerating value and innovation
  • Paper 13 – Titled: How Smart is your Operations with Big Data? Smarter Operations with Big Data
  • Paper 14 – Titled: A Framework for Visual Decision-Making Modeling Intelligence Behaviors
  • Paper 15 – Titled: Development of an Ontology-Based Framework for Turning Dynamic Data into Great Healthcare Strategy, Productivity & Performance
  • Paper 16 – Titled: Managing & Modelling Uncertainty through Visual Decision-Making Strategy Behaviour
  • Paper 17 – Titled: Innovating with People Analytics for Operational Intelligence
  • Paper 18 – Titled: Impact of Digitalisation Techniques to Transform Decision Making Towards Transcription Mistakes Improving Medicines Management 

The Role and Effectiveness of Using Bayesian Statistics in Software Engineering Quality

Abstract
Bayesian Statistics provides a valuable way of analysing data. The paper addresses three key aspects of software design for an organisation – uncertainty, organisational behaviour and relationships. It investigates and presents the role and effectiveness of Bayesian Statistics techniques for quantitatively evaluating decisions about engineering a new software design. Two of the many factors affecting this decision are the proportion of users for whom the software system will prove effective (O1) and the software's efficiency (O2).

Both O1 and O2 will generally be unknown; however, statistical information about their characteristics can be obtained. The foundations and critical theoretical concepts of Bayesian Statistics, and the need for them, are explained and employed through an empirical case study approach. The aim is to combine observed data with other relevant aspects of the problem to make the best decision. The research sought to generate new insights, through comparison and evaluation, into the uncertainty of decision-process conditions when designing and building quality into software.
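To make the idea concrete, here is a minimal sketch assuming a conjugate Beta-Binomial model for O1 with purely hypothetical trial counts; the paper's actual models and data may differ:

```python
# A minimal Beta-Binomial sketch (hypothetical data, not the paper's):
# O1 is the unknown proportion of users for whom the software proves
# effective; observing trial outcomes updates a Beta prior over O1.
from scipy import stats

effective, trials = 42, 60                 # hypothetical trial outcomes

prior = stats.beta(1, 1)                   # uniform prior over O1
posterior = stats.beta(1 + effective, 1 + trials - effective)

print(f"prior mean of O1:     {prior.mean():.3f}")
print(f"posterior mean of O1: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)          # 95% credible interval
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```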

Keywords: Bayesian Statistics, Statistical Models, Uncertainty, Software Engineering, Decision


How to Engage the Strategic Mind

Abstract
The paper presents lessons learnt in applying a dynamic approach to scanning and scenario building of the smart mind in an organisational context in the UK. The application of the method directly demonstrates the assessment and scanning of the case study organisation. The paper highlights the approach to engaging diverse stakeholders and groups in smart thinking strategies about the present and future. The methodology also describes the direct fragmentation of data in order to detect patterns of meaning and probability. The patterns obtained represent the indirect notion of accepting or rejecting the uncertainty of information bounded by complexity and evolving environments.
 
The organisation is a UK entity involved in a wide range of businesses that look after the community and services in the public interest, heavily driven by financial investment, policy and service orientation. Between 2009 and 2012, the organisation was caught up in global change, and trends dramatically impacted many areas of the business. The organisation's position in the UK market suffered a downturn as the European economic bailouts increasingly impacted the UK budget.

The five competitive forces in the operating environment, driven by consumer needs and demands, also changed the organisation internally. The organisation restructured through downsizing and reorganising itself as different waves of consumer requirements arrived. Mounting consumer requirements expanded the product and service portfolio, but there were concerns over growing uncertainty and a lack of infrastructure across the organisational model.

Keywords: Strategic, Smart Mind, Decision Making


Framing the Domain Problem - Contextual Influences on Content Interaction


This paper is organised as follows. Section 1 introduces an empirical case study and the motivation for this work. Section 2 provides an overview of the data and its collection process. Section 3 introduces the concepts of Bayesian statistics techniques and applies them to the data and the proposed research objectives, conducting an analysis of the results. Variables can be considered quantities that assume a variety of values in a specific problem (Eastham, 1991).

A problem statement can be represented by equations, written in mathematical shorthand as "Y is a function of X". This is a general-form equation in which "function" means "is determined by". Thus Y is determined by X; this expresses a mathematical relationship, not necessarily a causal dependence. If a value is assigned to X, then the value of Y can be calculated.

A problem statement occurs in a domain over which a function, assigned as f, is defined. Consequently, the function can be represented as Y = f(x) and, more generally, as Y = f(x1, x2, x3, … xn). Here f(x) produces a range of elements as the output of the function, and the function may be either explicit or implicit. Y is an explicit function when there is a definite rule specifying how Y's value (the outcome) is determined by the value (arbitrarily) chosen for X; this also reflects that Y may depend on a series of X values. However, if the values of X and Y are related and not independent of one another, the function is implicit and is expressed as f(x, y) = 0. Here the values of X and Y stand in a mutual relationship, and the variables determine each other.
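A short illustration of the distinction, using SymPy; the particular rules chosen below are arbitrary examples, not taken from the paper:

```python
# A minimal sketch contrasting the explicit form Y = f(X) with the
# implicit form f(X, Y) = 0. The quadratic rule and the circle are
# arbitrary illustrations.
import sympy as sp

x, y = sp.symbols('x y')

# Explicit: a definite rule determines Y from any chosen X.
f = x**2 + 3*x + 2
print(f.subs(x, 4))            # Y = 30 when X = 4

# Implicit: X and Y are mutually related, written as f(x, y) = 0.
g = sp.Eq(x**2 + y**2 - 1, 0)  # e.g. points on the unit circle
print(sp.solve(g, y))          # Y cannot be chosen independently of X
```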

The Case Study
An example problem is applied whereby:

Let Qd be the quantity of orders, and let the following also be defined:
Oa – the introduction of online advertisement
Pc – product choices
N – the population

The first hypothesis is that the quantity of orders created depends on the amount of advertisement and the population, which can be written as the explicit function Qd = f(Oa, N). The number of orders is a function of the online marketing advertisement and the population; i.e. orders depend on advertisement and population. Notice how the independent variables sit inside the brackets. The next hypothesis can be depicted as Oa = g(Pc); that is, the advertisement is a function of (depends on) the number of product choices available.
These expressions have been reintroduced to illustrate the analysis. Where the two variables determine each other, the relationship is in implicit form, and the function can be denoted as

G(Pc, Oa) = 0

Placing the two models together yields two equations and the classification of dependent and independent variables (N being an independent variable). The above equations can also be represented in diagrammatic form.

C = a + bY (Online advertisement)
C1 = a1 + b1Y (Offline advertisement)
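A hedged numerical sketch of how the two hypotheses compose; every coefficient below is a hypothetical placeholder rather than an estimate from the case study:

```python
# Composing Qd = f(Oa, N) with Oa = g(Pc). All coefficients are
# invented to show how the equations chain together.

def g(pc: int) -> float:
    """Advertisement level as a function of product choices: Oa = g(Pc)."""
    return 2.5 * pc

def f(oa: float, n: int) -> float:
    """Quantity of orders as a function of advertisement and population."""
    return 0.8 * oa + 0.001 * n          # Qd = f(Oa, N)

pc, n = 12, 50_000
oa = g(pc)
print(f"Qd = {f(oa, n):.1f} orders")

# The paired linear forms above, C = a + bY (online) and C1 = a1 + b1*Y
# (offline), differ only in intercept and slope:
a, b, a1, b1 = 5.0, 0.9, 8.0, 0.4        # hypothetical parameters
for Y in (10, 20, 30):
    print(Y, a + b * Y, a1 + b1 * Y)
```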

The Differential Calculus

Calculus is concerned with rates of change. The hypothesis is that an online advertising presence increases sales orders; orders are a function of the online presence: Orders = f(marketing). The variable orders (O) depends on the variable online marketing (M). Altering marketing will bring about a change in the number of orders by a certain amount. Such an amount of change is expressed with d; as marketing (M) is the focal point, the change is written dM.

If M changes in size by dM, then its magnitude becomes M + dM. If marketing changes, the number of orders will also change; the resulting change in orders (O) can likewise be written as a differential, dO, giving the value O + dO. However, this does not mean that dO will necessarily move in the same direction as dM. The figure below illustrates a possible relation between marketing (M) and orders (O).
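The derivative dO/dM can also be sketched symbolically; the diminishing-returns response curve assumed below is illustrative only, since the text specifies just that orders are some function of marketing:

```python
# A small sketch of the rate of change dO/dM using SymPy. The concrete
# curve O(M) is a hypothetical diminishing-returns example.
import sympy as sp

M = sp.symbols('M', positive=True)
O = 100 * sp.log(1 + M)       # hypothetical response of orders to marketing

dO_dM = sp.diff(O, M)         # change in orders per unit of marketing
print(dO_dM)                  # 100/(M + 1)

# A small change dM produces approximately dO = (dO/dM) * dM:
dM = 0.5
print(float(dO_dM.subs(M, 10) * dM))   # approximate change in orders near M = 10
```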

Keywords: Domain Problem, Complexity, Contextual, Content, Mathematical Formula

Flow Analysis

Abstract
Numerous research studies have examined what factors affect human decision-making, a topic that has long interested researchers. Many claim that unconscious processes, rather than conscious, logical, rational deliberation, account for a major portion of human decision-making. This is our capacity to recognise and respond to what we refer to as flow analysis. Thus, flow analysis opens another avenue for communication, one that builds on interpersonal interactions rather than spoken exchanges. Accordingly, some sincere signals are challenging to forge and thus offer a window into our intentions, goals, and values.

As a result, flow analysis of signalling behaviours such as the degree of influence, activity, mimicry and consistency can be used to anticipate the outcome of decision-making with high levels of accuracy. For instance, a person's level of interest can be determined by observing how active they are throughout a conversation (in terms of volume, frequency and hand gestures). Just as mimicry can accurately express empathy, the degree of influence a person has in a discussion can accurately indicate attention. Additionally, since a person's behaviour is a function of their social network, tuning into the pattern of social signalling gives access to tacit knowledge dispersed throughout the network and builds a kind of network intelligence that helps us make better decisions. Therefore, by concentrating on these sincere signals, we ought to comprehend group behaviour better.
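As a rough illustration, two of these signals can be proxied from a simple log of conversation turns; the definitions and data below are assumptions for demonstration, not the measures used in the study:

```python
# Illustrative proxies for two signalling behaviours, computed from a
# hypothetical log of conversation turns as (speaker, seconds) pairs.
from statistics import pvariance

turns = [("A", 4.0), ("B", 2.5), ("A", 3.8), ("B", 2.9), ("A", 4.1)]

def activity(speaker: str) -> float:
    """Share of total speaking time: a simple proxy for interest."""
    total = sum(d for _, d in turns)
    return sum(d for s, d in turns if s == speaker) / total

def consistency(speaker: str) -> float:
    """Variance of turn length: lower variance as a proxy for consistency."""
    return pvariance([d for s, d in turns if s == speaker])

print(f"A activity: {activity('A'):.2f}, variance: {consistency('A'):.3f}")
print(f"B activity: {activity('B'):.2f}, variance: {consistency('B'):.3f}")
```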

Studies have emphasised how multimodal process modelling forms the foundation of human communication, drawing on Claude Shannon's mathematical theory of communication. These indicators reveal communication issues when they stray from expected patterns. The pattern of deviation reveals the nature of the issue and suggests solutions. Furthermore, barriers and problems may be indicated by the absence of connections. Flow analysis is therefore extremely important in both "normal" and dysfunctional communication. It offers simple ways to manage the stakeholders' relationship with the user and their current mission. However, humans and groups or communities require prior knowledge to comprehend and generate flow behaviour. To sum up, this research area integrates the findings of various stakeholders in order to advance our comprehension of flow analysis process modelling and to inform the design of social groups and communities that interact.

Summary
Human interaction is seamless and devoid of overt effort. The foundation of this incredibly efficient communication is flow analysis. Occasionally, interactions go badly; the encounter is still seen as fluid, though, if issues are quickly fixed. The study explains how we might use community intelligence to improve ourselves by analysing flow movement.

Keywords: Flow Analysis, Communication Theory, Mathematical, Group Behaviour


Architecting Smarter Services with Big Data Analytics

Abstract
The UK public sector pharmaceutical healthcare industry is ailing and in need of help; £3.6 billion is spent annually on pharmaceutical companies. In today's dynamic markets, the public sector and healthcare in the UK are under significant and unprecedented pressure to improve productivity and quality and to embrace change. Despite this enormous investment, and the magnitude of the opportunity for public pharmaceutical healthcare to do both good and well, all too many efforts fail because of the limited time and energy spent on innovation development.

The Government, public sector healthcare, society associations, business-to-business partners and stakeholders are interdependent parts of a business chain model. A salient point, however, is the need to overcome the vertical and horizontal obstacles to integrating the activities required for analysing predictive and future performance.

The aim of this paper is two-fold. Firstly, the paper investigates the linkages and relationships between strategy and operations in pharmaceutical improvement efforts by examining findings on uncertainty and new competitive intelligence. Secondly, the aim is to use this information to postulate an ecosystem model as a way to achieve new product and service innovation impact. The research takes an exploratory approach, consolidating structured and unstructured data and using Visual Decision Making (VDM), which the authors have developed, to visualise new competitive intelligence and how operations and performance management prospects contribute towards strategic management.

The visual modelling findings indicate a need for improved integration across operations to transform a healthcare organisation's level of service and technology innovation. The exploratory study finds substantial evidence that the pharmaceutical healthcare case does not appear to be adopting intelligence as rapidly as expected, not least because of a limited understanding of the impact of emerging industries. The paper suggests that business ecosystem excellence can offer a strong foundation for developing new strategies, activities and behaviour change in a healthcare organisation.

The paper is structured in three sections. Firstly, the problem case background is described. Secondly, the building and labelling of competitive attributes is considered. Thirdly, a relative predictive forecast analysis is presented by combining and mapping these data sources with VDM. Finally, recommendations for prospective future work are addressed.

Keywords: Big data, Analytics, Complexity, Performance, Visual modelling


From Complexity to Using Big Data to Understand Crime: Data Enhanced Predictive Modelling for Targeting Crime Analysis

Abstract
Crime and public safety are a top societal concern, and strengthening community relations is a prerequisite. The security force is inundated with the operational tasks of combating the forged, counterfeit and fraudulent behaviours of those who commit crimes. With government budget cuts and limited resources, as crimes rise and processes become more complex, it is becoming increasingly challenging to manage crime rates daily. The ability of the security forces to assess, manage, connect, make sense of and understand data to create actionable insights is critical to improving performance and competitive outcomes.

The operations teams are experienced and intuitive. However, sustaining the fight against crime in today's complex environment requires rigorous analysis of multiple data sources. Multiple sources of data are crucial artefact features of information and are considered among the most valuable assets for predicting and controlling crime, deploying resources effectively, and supporting security professionals. With the widespread online and offline information available today, people's activities generate more information than ever before.

Crimes are evidenced through the activities involved, the behavioural issues raised, and the operational response. This paper aims to assess the applicability of volume crime performance management through crime analysis, using qualitative and quantitative data from the operations process in a security services context. The technique applied is a structured exploratory Visual Decision Making (VDM) approach developed by the authors.

This paper examines how a security force can use over 10,000 data sets to identify, tackle and reduce crime. The paper seeks to match identified opportunities for prevention, disruption and the organisation of security enforcement against the distribution of skills and resources. The analysis suggests that multiple sources present a fuller picture of the conditions, supporting appropriate, well-founded decision-making towards anticipated risk. Identifying key patterns helps the force run efficiently and effectively and helps the community obtain the best possible service, given the limited resources available.

The paper is structured in three sections. Firstly, the problem case study background and research approach are described. Secondly, the challenges of big data analytics in supporting decision-making in crime services and in identifying the relationships between criminal activities are discussed. Thirdly, the methodology, combining and mapping data sources using VDM, is presented. Finally, recommendations for prospective future work are addressed.

Keywords: Big data, Analytics, Artefacts Behaviour, Crime, Modelling.

Identifying bundles of crimes using big data – Accelerating value and innovation

Abstract
The security force is under significant pressure to reduce costs and drive efficiency within its resources while improving services to society. With rising demands to meet changing consumer expectations, making connections between times, places and people is vital, and there is a need to work in partnership with both the public and private sectors (Clarke and Newman, 2007). The work evolved from Kahlon and Tse's (2011, 2012 and 2013) research on addressing critical business issues analytically, and aims to convert the data sets into a succinct visual representation.
The paper is split into three sections. The first presents a case definition. Secondly, the use of big data and analytics is examined in more detail, looking at how data can harness decision-making power to provide actionable intelligence. Thirdly, a mapping analysis combining the data sources is presented. The last section provides a summary and prospective future work to expand further.

Spatial analysis is an important factor in detecting and predicting crime. It involves looking at characteristics of the environment and the places where crime may occur. It is imperative to look at information such as parcel deliveries, locations of schools, the network of streets and parks, burglary hot spots, bin locations and car parks. Capturing this information enables the key questions of where, how and when, and of which indicators attract such criminal activity. Identifying crime is a key theme and remains a top concern for society's safety. Although big data is still in its early stages, it is claimed that big data can collectively address problems related to variability in crime quality. The security force is under significant pressure to reduce costs and drive more efficiency within its resources whilst improving services and the protection and reassurance of society and communities (Innes, 2004; Morris, 2006).
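One common form of such spatial analysis is grid-based hot-spot counting, sketched below with hypothetical incident coordinates and cell size (not data from the case):

```python
# A minimal sketch of grid-based hot-spot counting: map incidents to
# grid cells and rank cells by incident count. All values are invented.
from collections import Counter

# (x, y) coordinates of hypothetical burglary incidents
incidents = [(1.2, 3.4), (1.3, 3.5), (1.1, 3.6), (7.8, 0.9), (1.4, 3.3)]

CELL = 1.0  # grid cell size, in the same units as the coordinates

def cell(x: float, y: float) -> tuple[int, int]:
    """Map a point to its grid cell."""
    return (int(x // CELL), int(y // CELL))

counts = Counter(cell(x, y) for x, y in incidents)
for c, n in counts.most_common(3):
    print(f"cell {c}: {n} incidents")   # densest cells are candidate hot spots
```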

 
The challenge is to move beyond analysis using historical data in order to better understand and profile crime activity and behaviour in Northern Ireland. One of the largest uses of personnel and resources is patrolling, the backbone of the security services. Often, officers on patrol simply react and respond to a situation rather than taking a more proactive approach. The presence of local police officers performing their operations is underused and inadequately equipped with decisive technological intelligence. Traditionally, bureaucratic administration demands that paperwork and forms be completed during a crime-related activity and, most importantly, that they contain detailed information (Clarke and Erk, 2003). The purpose of this research is to draw upon big data concepts and their role in supporting policing during day-to-day operations, in order to address the issue of reducing crime by being ready and knowing before the crime happens.



How Smart is your Operations with Big Data?
Smarter Operations with Big Data

Abstract
To prosper in today's ever-changing world, the healthcare industry too must change, and there is a need to move beyond its existing silos. Rising costs and government budget cuts are forcing massive changes in the healthcare industry. UK public sector healthcare is under pressure to deliver an estimated £20 billion in efficiency savings. In an era of intense competition and consumer volatility, healthcare organisations need strategies for obtaining the best value out of data.

The paper explores how the role and use of data can create insights through applied analytical disciplines, highlighting the value that data and analytics provide in underpinning new productivity growth within operations by managing innovation, cost and complexity. This involves applying statistical, contextual, qualitative, quantitative and predictive behaviour modelling to drive fact-based planning, decision-making and change management, as well as measurement and learning. It studies these links in the healthcare domain through a case study on improving decision-making and driving actionable results within forecasting, budgeting and supply chain management, specifically how services can be improved whilst reducing costs.

Moreover, the predictive modelling findings indicate a need for integration across operations to transform a healthcare organisation's information transfer service and technology level. The three predictive characteristics establish, firstly, where real business and decision-making impact can take place; secondly, overlooked areas; and thirdly, opportunity costs. These help deepen the understanding of the challenges associated with the wealth of data available.

Three levels of analytics are explored: (a) aspirations, (b) experience and (c) transformation. Experience comprises looking at the day-to-day activities driving the operations, using behaviour modelling techniques. Transformation entails looking at the activities which guide the day-to-day operations, including a focus on unstructured data.

The paper is structured in three sections. Firstly, a review of the case study background is presented. Secondly, the research methods are outlined, showing how the research topic of the role of big data and analytics in driving decision-making emerged, followed by a description of the research approach undertaken. Thirdly, a relative predictive forecast analysis is presented, combining and mapping these data sources using the Visual Decision Making the authors have developed, together with the management implications. Finally, recommendations for prospective future work are addressed.


Keywords: Big data, Analytics, Behaviour, Performance, Predictive modelling


A Framework for Visual Decision-Making Modeling Intelligence Behaviors

Abstract
In one of the first applications of measuring the intelligence of data science in the security force, a tool created by the authors has been implemented to measure and analyse quantitative and qualitative data on identities in criminal activities. The security force is inundated with the operational tasks of combating the forged, counterfeit and fraudulent behaviours of those who commit crime. In alignment with law enforcement and Government, this paper uses a simulation eco-model tool developed by the authors, evolved from working with the security forces.

The paper seeks to match identified opportunities for prevention, disruption and the organisation of security enforcement against the distribution of skills and resources, as well as identifying the interrelationships of such criminal activities and the confirmed fraudulent behaviours, illegal activities and vulnerable people attached to these. With the help of the Government, information sharing and the availability of statistical data make it possible to disseminate and drill down into the data sets further. Furthermore, challenges are depicted comprehensively and visually, and a decision-making strategy behaviour is formulated to determine the overall impact on the organisation and society.

Keywords: Visual Decision Making, Simulation, Modelling, Intelligence, Behavior, Pattern Analysis


Development of an Ontology-Based Framework for Turning Dynamic Data into Great Healthcare Strategy, Productivity & Performance

Abstract
As the amount of content and the number of users grow, it is becoming increasingly difficult for organisations and users to find meaningful content relevant to their particular needs. Organisations and communities face a new landscape of opportunities and challenges. They need flexible methods to manage dynamic content production and delivery to their diverse stakeholders, and attribute metadata is a key enabler in making this happen.

The production and use of content rely heavily on computer automation. This requires content to be enhanced with dynamic representations of its semantics. However, semantic data is only helpful for various purposes if the characteristic structure(s) is understood and defined according to the domain. The core of the problem is that, whatever level of content the user requires, the content is presented in the same way.

Consequently, an ontology comprising conceptual modelling that maps content into meaningful concepts is required. This paper introduces the development of an ontology-based technique, rooted in the core of healthcare medications, that can be used to define a metadata structure for digital content. The paper presents the application of the ontology development framework in an organisation where healthcare companies explore new opportunities for medications and content delivery.

Keywords: Ontologies, Dynamic Information Content, Visual Ecosystem, Complex E, Information Technology (IT)

Managing & Modelling Uncertainty through Visual Decision-Making Strategy Behaviour

Abstract
As demands and requirements increase and market forces persist, many organisations focus on improving efficiency and managing risks to provide attractive solutions. Often, managing market risk is treated as an issue of compliance: deriving rules and ensuring that employees adhere to and are bound by them. However, this will not diminish the likelihood or the impact of disaster, nor prevent failures. In this paper, a simulation structure of the model study is presented. It looks at how organisations can identify and prepare naturalistic approaches to managing and governing the risks that arise internally and externally to an organisation's strategy and operations. Three categorisations of complex risk are presented, allowing an organisation to measure them through a visual-based model (the eco model). The eco model examines the challenges individuals and organisations face in anchoring these discussions in their risk strategy formulation and implementation processes.

The eco model lends itself to being carried out in two new phases of risk management categorisation. The first is targeting and determining the best concept strategies; the second is the visual context development stage. This two-phase approach ensures organisations can perceive the most efficient use of the content. The key to success in deploying the model is to realise that a logical approach can be optimised from both top-down and bottom-up perspectives (the layers of the organisation), though this is seldom the most realistic in practice. Subsequently, a scoping phase of the decision-making establishes an appropriate working model to represent the organisation in alignment with its overall objectives and goals. Stakeholders and organisations can anchor these strategic risk choice formulations from top-down and bottom-up approaches, an inherently dynamic process enabling constructive discussion.

Simulating Visual Decision Making (VDM) models (eco-models) can have many uses, and the benefits often arise when the eco-models are applied to an organisation or a complex, chaotic problem. The model provides three level categories of detail that can be applied to any organisation, whether qualitative, quantitative or both. The level of these benefits depends upon the constraint satisfaction impact, including the complexity of the problem and the variability of internal and external, historical and predictive information. For these, simulation can evaluate the impact based on assigned variables and determine the overall impact analysis.
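A hedged Monte Carlo sketch of this simulation idea follows; the distributions, probabilities and additive impact score are invented placeholders, not the eco model's actual parameters:

```python
# Monte Carlo evaluation of overall impact from assigned risk variables,
# across the three categories named in the paper. All numbers are
# hypothetical placeholders.
import random

random.seed(1)

def simulate_once() -> float:
    preventable = random.uniform(0.0, 1.0)       # high probability, low impact
    strategy    = random.uniform(0.0, 5.0) if random.random() < 0.3 else 0.0
    external    = random.uniform(5.0, 20.0) if random.random() < 0.05 else 0.0
    return preventable + strategy + external     # combined impact score

runs = [simulate_once() for _ in range(10_000)]
print(f"mean impact: {sum(runs) / len(runs):.2f}")
print(f"worst 1%:    {sorted(runs)[int(0.99 * len(runs))]:.2f}")
```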

Keywords: Visual Decision Making, Simulation, Modelling, Risk, Uncertainty, Complex

RISK & UNCERTAINTY

Identifying risk challenges is tougher than identifying and influencing what drives market attitudes and behaviour. In the Japan tsunami described above, the instant dilemma behaviour was hard to untangle. In order to manage risk, the first step is to create a visual effect and assign specific qualitative and quantitative risk measures to the risks organisations face. After conducting field research, three core risk event categories are established: preventable, strategy and external. All three categories can harm an organisation's objectives, strategy and survival.

Types of Risks
The first type, preventable risks, comprises explicit, logical internal risks, such as those in routine operational processes, that are avoidable, controllable and should be culturally eliminated. This type of risk is systematically and actively managed through monitoring and through setting the organisation's values and ethical standards of what is and is not acceptable. It is therefore in a bundled state that is controllable and bounded by rules and constraints, against which certainty can be compared effectively. The certainty here is of high probability but low risk.

Strategy risks are the second category, where an organisation voluntarily accepts a risk to generate a greater, more substantial return from the strategic choice undertaken. Strategy risks can be more difficult at this stage and are illogical, implicit structures. The structures comprise unbundled thoughts that materialise from fundamental decisions concerned with connecting the objectives and goals of an organisation.

The third category, external risks, concerns the external environmental conditions that may arise. External factors are more difficult to acquire and are more important for decision-making. Bremmer (2005) highlights macroeconomic, political and natural risks as key examples of this distinction; awareness of possible impacts and opportunities is pursued by unbundling their identification to build assurance and minimise risk. This is because such risks are of low probability yet create high-impact consequences.

Simulating Uncertainty and Complex Risk Areas
Organisations need to be protected daily, both internally and externally. Organisations generate activity risks that could offer advancement and a lead in the market. This can be done by recognising risk benefits through the monitoring and improvement of performance with the aid of VDM. VDM can be simulated and applied to the following uncertain and complex risk areas.



Innovating with People Analytics for Operational Intelligence


Abstract

Modelling intelligence is crucial for many organisations to sustain themselves in today's competitive environment. Organisations are faced with ever-growing amounts of structured and unstructured data and are exploring ways to transform this information. The pace of the healthcare industry, specifically its business operations, is faster than ever; transactional events occur constantly, all day and every day. The challenge is to make sense of these data and unlock their potential. People analytics draws on data from as-yet-unestablished process events. Modelling the operational process enables building intelligence around a set event, focused on gathering information, the support required, and the current services being delivered.

The paper highlights and discusses a model developed by the authors. The research was designed to investigate people analytics alongside operational intelligence practices and the potential benefits of improving existing processes within a healthcare organisation. The paper is split into three sections. Firstly, a literature background overview is provided. Secondly, the problem case study background is described. Thirdly, a predictive forecast analysis is introduced by combining the literature theory with modelling of the case study. Finally, recommendations for prospective future work are addressed.

Keywords: People analytics, Operational Intelligence, Performance, Data.


Impact of Digitalisation Techniques to Transform Decision Making Towards Transcription Mistakes Improving Medicines Management 

Abstract
Quality problems on hospital ward order sheets may be a source of medication errors, and these physical transcription mistakes can, to a significant extent, affect patient safety and potentially cause severe harm. Recent studies on adverse events in the transcription of medication orders have shown that transcription errors are not rare. Inpatient ward orders must be completed in order to inform the next process involved in a patient's medicines management. Recent studies and research show many hospitals are increasingly seeking digital technologies to help reduce the time spent transcribing orders.
 
This paper provides both an empirical analysis and an observational study in a hospital environment, applying statistical quality techniques to understand cognitive behaviour. The main objective of the paper is to investigate the feasibility of introducing digitalisation, in the form of an electronic order sheet, to facilitate transcribed medication requests at the ward level. The secondary objective was to assess the impact of transcribing through the classification of potential mistakes, and to consider how quantifying their quality can assist in engineering the software design.

A representative sample of 353 ward order sheets was analysed in order to identify and classify the categories of mistakes and the risks associated with them. Transcription errors were detected by checking discrepancies, and error incidence was calculated by type of error, directions and their potential implications.
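As an illustration of the tabulation step, the sketch below computes incidence per sheet by error category; the category labels and counts are hypothetical, not the paper's findings:

```python
# Tabulating error incidence by category over the 353 ward order sheets.
# The error-type labels below are invented examples.
from collections import Counter

# one hypothetical error-type label per detected discrepancy
errors = ["dose", "direction", "omission", "dose", "illegible", "dose"]

SHEETS = 353
counts = Counter(errors)
for error_type, n in counts.most_common():
    print(f"{error_type:10s} {n:3d}  incidence per sheet: {n / SHEETS:.4f}")
```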

The paper is split into three sections. Firstly, the background setting of the process is given, followed by an outline of the objectives and the methods used to identify the errors. Secondly, the results and typical errors are discussed, together with how quality improvement could be deployed and change engineered into an ideal solution through automated technology. Lastly, the paper provides a summary and future prospects for evolving the findings.

Keywords: Statistical Technique, Impact Analysis, Transcribing Error Prevention, Medicines Management, Structured and Unstructured Data


Correlating Communication Patterns with Performance and Productivity

Abstract
This paper studies patterns of online communication in a hospital healthcare environment. It measures the productivity characteristics and performance attributes of a team collaborating offsite and onsite, and seeks to identify correlations. The study uses flow analysis to identify the knowledge flow by analysing the team's accessible mailing list. Productivity is measured in terms of performance over the timeframe. Preliminary results indicate there is a correlation.

Introduction
Early studies of organisational performance and productivity focused mainly on the role of teams within the workforce. This paper goes beyond the traditional management view that seeks to explain performance and productivity. The coronavirus pandemic has dramatically changed the dispersed workforce over a period of time.

Various studies exist and, despite the changes in organisations' workforces, not all collaboration is equally successful in terms of productivity and performance. Different streams of active researchers are trying to better understand the reasons for the growing success of online communication. Some are testing a model of the influence of positivity on performance and productivity, whilst other researchers are attempting to establish the role of creativity. This paper studies the structural properties of the performance and productivity achieved by stakeholders during service development and the effect of various working relationships. The paper further investigates the role of different patterns of relationships and their influence on the team.

Performance and productivity differ across organisations. This paper extends the earlier communication analysis work and reviews the communication structure in email networks. The study aims to compare the communication structure with performance and productivity within a team. The communication structure is reviewed through one year's worth of emails from the team, and the output is measured in terms of wastage, value and turnaround time.
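The correlation step might look like the following sketch; the weekly email counts and turnaround times are invented purely to show the calculation:

```python
# Correlating a communication measure (weekly email volume) with a
# productivity measure (turnaround time). All values are hypothetical.
from scipy.stats import pearsonr

emails_per_week = [120, 95, 140, 110, 160, 80, 130]    # hypothetical
turnaround_days = [4.1, 5.2, 3.6, 4.4, 3.2, 5.8, 3.9]  # hypothetical

r, p = pearsonr(emails_per_week, turnaround_days)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # negative r: more email, faster turnaround
```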

Keywords: Performance, productivity, virtual community


