
This IBM Redbooks publication shows the strengths of z/VM and how you can use them to create a highly flexible test and production environment. Among the strengths shown in this book: you can run Linux on z/VM, run a sysplex under z/VM, develop code under z/VM for z/TPF, and provision Linux guests under z/VM. A vswitch allows you to connect all of your guests (all operating systems that run under z/VM) to the network easily, and you can simulate your production environment on a sysplex. The intention of this book is to show how these strengths can be used to simulate your production environment and to expand your application development and testing environments.
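
For readers who want a concrete picture of how a vswitch ties guests to the network, the following sketch shows the general shape of the z/VM CP commands involved. The switch name VSWITCH1, the OSA device number 0600, the guest user ID LNXGST01, and the virtual device number 1000 are illustrative assumptions, not values taken from this book.

    DEFINE VSWITCH VSWITCH1 RDEV 0600 ETHERNET
    SET VSWITCH VSWITCH1 GRANT LNXGST01
    DEFINE NIC 1000 TYPE QDIO
    COUPLE 1000 TO SYSTEM VSWITCH1

The first two commands are issued by a suitably authorized user to create the virtual switch and authorize a guest to use it; the last two are issued by (or on behalf of) the guest to create a virtual NIC and connect it to the switch. Exact operands vary by release, so treat this as an outline rather than a tested procedure.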
IBM® System z® servers offer a full range of connectivity options for attaching peripheral or internal devices for input and output to the server. At the other end of these connections are a variety of devices for data storage, printing, terminal I/O, and network routing. This combination of connectivity and hardware offers System z customers solutions to meet most connectivity requirements. However, to make use of these features, the System z server must be properly configured. This IBM Redbooks® publication takes a high-level look at the tools and processes involved in configuring a System z server. We provide an introduction to the System z channel subsystem and the terminology frequently used in the hardware definition process. We examine the features and functions of tools used in the hardware definition process, such as HCD, the CHPID Mapping Tool, and HCM. We discuss the input and output of these tools (IODF, IOCP, IOCDS) and their relationship to one another. We also provide a high-level overview of the hardware configuration process (the flow of generating a valid I/O configuration). We provide configuration examples using both HCD and HCM. The book also discusses newly available functions and guidelines for the effective use of HCD and HCM. This document is intended for system programmers and administrators who are responsible for defining and activating hardware changes to z/OS® and System z servers, and for the IBM representatives who need this information. General knowledge of z/OS and IOCP is assumed.
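
To give a flavor of what the output of this definition process looks like, here is a minimal, illustrative fragment of IOCP-style statements defining a channel path, a control unit, and a device. The identifiers used (CHPID 40, PCHID 140, control unit 3000, unit types 2107 and 3390B) are assumptions chosen for the example, not taken from the book, and a real IODF/IOCDS generated through HCD or HCM contains considerably more detail.

    CHPID    PATH=(CSS(0),40),SHARED,TYPE=FC,PCHID=140
    CNTLUNIT CUNUMBR=3000,PATH=((CSS(0),40)),UNIT=2107
    IODEVICE ADDRESS=(3000,32),CUNUMBR=(3000),UNIT=3390B

In practice, you would maintain these definitions through HCD or HCM against the IODF rather than coding IOCP statements by hand; the fragment is only meant to make the relationship between channel path, control unit, and device definitions easier to picture.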
In this IBM® Redbooks® publication, we expand upon the concepts and experiences described in "An introduction to z/VM Single System Image (SSI) and Live Guest Relocation (LGR)", SG24-8006. An overview of that book is provided in Chapter 1, "Overview of SSI and LGR" on page 1. In writing this book, we reused the lab environment from the first book but expanded it to include IBM DB2® v10 on Linux on System z® and two IBM WebSphere® Application Server environments. We also added a WebSphere application, used for performance benchmarking, whose workload allowed us to observe the performance of the WebSphere Application Server during relocation of the z/VM® 6.2 member that was hosting the application server. Additionally, this book examines the use of small computer system interface (SCSI) disks in the z/VM v6.2 environment and the results of using single system image (SSI) clusters and live guest relocation (LGR) in this type of environment. In the previous book, a detailed explanation of relocation domains was provided. In this book, we expand that discussion and provide use cases of relocation domains in different situations. Finally, because the ability to back up and restore your data is of paramount importance, we discuss one tool, the IBM Backup and Restore Manager for z/VM, which can be used in the new z/VM 6.2 environment. We provide a brief overview of the tool and describe the changes in the installation process that result from using single system image clusters. We also demonstrate how to set up the configuration file, and how to back up and restore both a user and an identity. This publication is intended for IT architects who will be responsible for designing the system and IT specialists who will have to build it.
In a world where product lifespans are often measured in months, the IBM® Transaction Processing Facility has remained relevant for more than four decades by continuing to process high volumes of transactions quickly and reliably. As the title of this book suggests, the z/TPF system uses open, standard interfaces to create services. Integration of new applications with existing z/TPF functions is a key factor in extending application capabilities. The ability for service data objects (SDO) to access the z/TPF Database Facility (z/TPFDF) provides a framework for data application program development that includes an architecture and application programming interfaces (APIs). SDO access to z/TPFDF provides remote client applications with access to z/TPF traditional data. In the simplest terms, service-oriented architecture (SOA) is a means by which like, or unlike, systems can communicate with one another despite differences in each system's heritage. SOA can neutralize the differences between systems so that they understand one another. SOA support for z/TPF is a means by which z/TPF can interact with other systems that also support SOA. This book discusses various aspects of SOA in the z/TPF system, including explanations and examples to help z/TPF users implement SOA. IBM WebSphere® Application Server was chosen as the partner system as a means of demonstrating how a world-class transaction server and a world-class application server can work together. This book shows you how you can exploit z/TPF as a transaction server, participating in an SOA structure alongside WebSphere Application Server. This IBM Redbooks® publication provides an introduction to z/TPF and the technologies critical to SOA. z/TPF is positioned as a provider or consumer in an SOA by supporting SOAP processing, communication bindings, and Extensible Markup Language (XML). An example is used to show how z/TPF can be used both as a Web service provider and as a consumer. A second example shows how to use WebSphere Operational Decision Management to apply business rules. A third example shows how business event processing can be incorporated in z/TPF applications. An example is also used to discuss security aspects, including z/TPF XML encryption and the z/TPF WS-Security wrapper. The main part of the book concludes with a discussion of z/TPF in an open systems environment, including examples of lightweight implementations to fit z/TPF, such as the HTTP server for the z/TPF system. The appendixes include information and examples using TPF Toolkit, sample code, and workarounds (with, yes, more examples).
IBM® z/VM® 6.2 introduced significant changes to z/VM with a multi-system clustering technology that allows up to four z/VM instances in a single system image (SSI) cluster. This technology is important because it offers you an attractive alternative to vertical growth by adding new z/VM systems. In the past, this capability required duplicate efforts to install, maintain, and manage each system. With SSI, these duplicate efforts are reduced or eliminated. Support for live guest relocation (LGR) allows you to move Linux virtual servers without disrupting your business or incurring loss of service, thus reducing planned outages. The z/VM systems are aware of each other and take advantage of their combined resources. LGR enables you to relocate guests from a system requiring maintenance to a system that will remain active during maintenance. A major advantage for DB2 v10 customers is that using z/VM 6.2 does not require any changes to existing DB2 structures. This is because DB2 v10 is installed as part of the Linux guest on z/VM and is fully integrated into LGR, which allows you to smoothly move DB2 v10 when you move Linux virtual servers, without interrupting either DB2 v10 or z/VM operations and services. This IBM Redbooks® publication will help you understand how DB2 10 on Linux for System z® behaves while the Linux guest that hosts it is being relocated with the z/VM 6.2 Live Guest Relocation feature. In this book, we explore memory management, the DB2 Self-Tuning Memory Manager feature, time synchronization, networking, and storage and performance considerations with regard to relocation. We also offer some best practices found during a live guest relocation for DB2 v10.
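
As a simple illustration of what a live guest relocation looks like from the operator's point of view, the following CP command sketch first checks whether a guest is eligible to move and then relocates it. The guest name LNXDB2 and the destination member name MEMBER2 are made-up examples, and options such as maximum quiesce time are omitted here.

    VMRELOCATE TEST LNXDB2 TO MEMBER2
    VMRELOCATE MOVE LNXDB2 TO MEMBER2

TEST reports any conditions that would prevent the relocation (for example, resources not available on the destination member), and MOVE performs the actual relocation while the Linux guest, and the DB2 server running inside it, keeps processing work. Treat the exact operands as an assumption to be verified against the CP command reference for your release.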
This book contains enough material for three complete courses of study. It provides an introduction to the world of logic, sets, and relations. It explains the use of the Z notation in the specification of realistic systems. It shows how Z specifications may be refined to produce executable code; this is demonstrated in a selection of case studies. The essentials of specification, refinement, and proof are covered, revealing techniques never previously published. Exercises, solutions, and a set of transparencies are available via http://www.comlab.ox.ac.uk/usingz.html
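
Because the book is organized around writing and refining Z specifications, a small example may help readers who have not seen the notation before. The following LaTeX sketch assumes the zed-csp macros commonly used to typeset Z (the book's own sources may use a different package), and the schema names Counter and Increment are invented for illustration only.

    \documentclass{article}
    \usepackage{zed-csp}   % Z typesetting macros; an assumption, not mandated by the book
    \begin{document}

    \begin{schema}{Counter}
      value : \nat
    \where
      value \leq 100
    \end{schema}

    \begin{schema}{Increment}
      \Delta Counter
    \where
      value < 100 \\
      value' = value + 1
    \end{schema}

    \end{document}

The state schema records an invariant on value; the operation schema constrains how a state change (Delta Counter) may occur. Refinement, as covered in the book, is about showing that a more concrete design preserves exactly this kind of specified behavior.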
Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability. It highlights their presence on different levels within the hardware and software stack to satisfy the needs of large IT organizations.
Data is one of the most critical and valuable assets of a business. Critical strategic decisions can be made more quickly and effectively when they are based on complete, accurate, and timely operational data. From this point of view, it is important to have an enterprise data management architecture that supports a flexible global view of the business. Many environments today are heterogeneous, with a high quantity and diversity of data. In this IBM® Redbooks® publication, we help enterprise architects and IT managers with these environments make decisions about a centralized database or data warehouse. We recommend a centralized data management environment on Linux® on System z®. We include guidance for IBM z/VSE™ and Linux specialists to reorganize existing IBM DB2® VSE data and build a database environment with continuous operation in Linux on System z. We begin this book by describing the possibilities and advantages of enterprise data management and different technical ways to realize it. Then we discuss planning, which is important for setting the foundation of the architecture that is implemented. We explain the hardware considerations for capacity and performance planning. For the z/VSE system and Linux on System z, we describe considerations for operation in a logical partition (LPAR) and in a virtualized environment with IBM z/VM®. In addition, we discuss the disk behavior for different workloads, storage dependencies, network connections, and DB2 database considerations. We also guide you in customizing the DB2 server for z/VSE, z/VM, and DB2 on Linux to allow existing z/VSE and z/VM applications to access the database on Linux on System z. We include the data migration, application considerations, dependencies, compatibility, monitoring, and tuning possibilities in such an environment.
This IBM® Redpaper™ publication discusses the need to monitor and measure different workloads, especially mobile workloads. It introduces the workload classification capabilities of IBM z Systems™ platforms and helps you to understand how recent enhancements to IBM MVS™ Workload Management (WLM) and other IBM software products can be used to measure the processor cost of mobile workloads. This paper looks at how mobile-initiated and other transactions in IBM CICS®, IMS™, DB2®, and WebSphere® Application Server can be "tagged and tracked" using WLM. For each of these subsystems, the options for classifying mobile requests and using WLM to measure mobile workloads are reviewed. A scenario is considered in which a bank is witnessing significant growth in mobile-initiated transactions and wants to monitor and measure the mobile channels more closely. This paper outlines how the bank can use WLM to do this. This publication can help you to configure WLM mobile classification rules. It can also help you to interpret Workload Activity reports from the IBM RMF™ Post Processor and to report on the CPU consumption of different workloads, including mobile and public cloud workloads.