
This book provides readers with a comprehensive review of image quality assessment technology, particularly its applications to screen content images, 3D-synthesized images, sonar images, enhanced images, light-field images, VR images, and super-resolution images. It covers topics including structural variation analysis, sparse reference information, multiscale natural scene statistics analysis, task and visual perception, contour degradation measurement, spatial angular measurement, local and global assessment metrics, and more. The image quality assessment algorithms in this book are efficient and perform well compared with other image quality assessment algorithms, as demonstrated by experiments on real-world images. Readers working in related fields can therefore use the results obtained with these quality assessment algorithms for further image processing. The goal of this book is to make these image quality assessment algorithms usable by engineers and scientists from a range of disciplines, such as optics, electronics, mathematics, photography, and computing. The book can serve as a reference for graduate students interested in image quality assessment techniques, for front-line researchers practicing these methods, and for domain experts working in this area or conducting related application development.
This book provides comprehensive coverage of the latest trends and advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including images, video, graphics, and animation) from the server to the consumer under resource constraints such as computation, bandwidth, storage space, and battery life.
The book covers state-of-the-art visual quality assessment (VQA) and learning-based visual quality assessment (LB-VQA) by providing a comprehensive overview of the existing relevant methods. It gives readers basic knowledge, a systematic overview, and the latest developments in VQA, and it introduces the machine learning (ML) preliminaries needed for VQA tasks together with newly developed ML techniques for this purpose. It is therefore particularly helpful to beginners (including research students) entering the VQA field in general and LB-VQA in particular, while the new developments in VQA and LB-VQA detailed in the book will give peer researchers and engineers fresh insights.
This argument is supported by extensive experimental comparisons of existing photo quality assessment approaches, as well as of our new features, over different categories of photos. In addition, we propose an approach that trains an adaptive classifier online to combine the proposed features according to the visual content of a test photo, without knowing its category. Another contribution of this work is the construction of a large and diversified benchmark database for photo quality assessment research. It includes 17,613 photos with manually labeled ground truth. This new benchmark database will be released to the research community.
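For readers unfamiliar with the idea of training a classifier online to combine hand-crafted quality features, the sketch below shows the general mechanism using a plain stochastic-gradient logistic regression; the feature names, labels, and learning rate are illustrative assumptions, not the features or classifier actually proposed in this work.

```python
import numpy as np

def online_logistic_update(w, x, y, lr=0.1):
    """One stochastic-gradient update of a logistic-regression combiner.

    w : current weights, one per quality feature plus a bias term
    x : feature vector for one photo, with a trailing 1.0 for the bias
    y : label (1 = high quality, 0 = low quality)
    """
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # predicted probability of "high quality"
    return w + lr * (y - p) * x              # gradient step on the log-likelihood

# Hypothetical per-photo features (e.g. sharpness, colorfulness, contrast) plus bias.
rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(1000):
    feats = rng.random(3)
    label = 1 if feats.sum() > 1.5 else 0    # toy ground truth, for the demo only
    w = online_logistic_update(w, np.append(feats, 1.0), label)

test = np.append(rng.random(3), 1.0)
print("P(high quality) =", 1.0 / (1.0 + np.exp(-np.dot(w, test))))
```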
With multimedia research burgeoning, video applications have become essential to our daily life. However, as compression becomes more aggressive, excessive data loss degrades the perceived video quality for viewers. An accurate quality measurement is therefore important for improving or preserving the quality of compressed video. This dissertation focuses on measuring the quality degradations caused by compression. We specifically target distortions whose impact exceeds the human perceptual threshold, which are also called artifacts. This type of distortion usually appears in a structured form, a characteristic that makes quality assessment highly content dependent and causes many existing metrics to fail. Some previous research has tried to raise the accuracy of video quality assessment by considering human visual system (HVS) effects or human visual attention factors. However, the HVS and human visual attention interact strongly in the video quality assessment process, and no existing quality measurement research takes both of them into account. In addition, cognitive factors significantly influence the visual quality assessment process, but they have been ignored in current quality assessment research. Based on these realizations, a new video quality assessment philosophy is introduced in this thesis. It considers the characteristics of artifacts, HVS effects, visual attention, and cognitive non-linearity. First, a new human visual module is proposed that takes both visual masking and attention effects into account; its design makes it easy to embed in any video-quality-related application. Based on this new human visual module, a blurriness metric is designed that incorporates cognitive characteristics. This new blurriness metric does not rely on edge information and is more robust when assessing heavily compressed video data. A metric for artifacts introduced by motion-compensated field interpolation (MCFI) is also implemented; it is the first metric designed for measuring the spatial quality of temporally interpolated frames. From a temporal quality perspective, a novel temporal quality metric is designed to measure the temporal quality degradation caused by both uniformly and non-uniformly distributed frame loss. Experimental data show that these metrics significantly outperform existing metrics.
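As a rough, generic illustration of no-reference blurriness estimation that, like the metric described above, does not rely on edge detection, the sketch below scores blur from the share of high-frequency energy in a frame's spectrum. It is not the dissertation's metric; the function name, cutoff, and scoring rule are assumptions made purely for illustration.

```python
import numpy as np

def spectral_blurriness(frame: np.ndarray, cutoff: float = 0.25) -> float:
    """Toy no-reference blurriness score in [0, 1] for a grayscale frame.

    Blurry frames concentrate spectral energy at low spatial frequencies,
    so the score is one minus the fraction of energy above `cutoff`
    (expressed as a fraction of the Nyquist frequency). This is only an
    illustration, not the metric proposed in the dissertation above.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame.astype(np.float64))))
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial frequency of each bin (0 = DC, about 1 = Nyquist).
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    total = spectrum.sum()
    high = spectrum[radius > cutoff].sum()
    return float(1.0 - high / total) if total > 0 else 0.0

# A low-pass-filtered frame scores as blurrier than a noisy, detailed one.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurry = np.cumsum(np.cumsum(sharp, axis=0), axis=1) / (128 * 128)  # crude low-pass
print(spectral_blurriness(sharp), spectral_blurriness(blurry))
```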
Video is the main driver of bandwidth use, accounting for over 80 per cent of consumer Internet traffic. Video compression is a critical component of many of the available multimedia applications; it is necessary for the storage or transmission of digital video over today's band-limited networks. The majority of this video is coded using international standards developed in collaboration between the ITU-T study groups and MPEG. The MPEG family of video coding standards began in the early 1990s with MPEG-1, developed for video and audio storage on CD-ROMs, with support for progressive video. MPEG-2 was standardized in 1995 for video on DVD and for standard- and high-definition television, with support for interlaced and progressive video. MPEG-4 Part 2, also known as MPEG-4 Visual, was standardized in 1999 for low-bit-rate multimedia applications on mobile platforms and the Internet, with support for object-based or content-based coding by modeling the scene as background and foreground. Since MPEG-1, the main video coding standards have been based on so-called macroblocks. However, research groups continued to work beyond the traditional video coding architectures and found that macroblocks could limit compression performance for high-resolution video. Therefore, in 2013 High Efficiency Video Coding (HEVC), also known as H.265, was released, with a structure similar to H.264/AVC but using coding units with more flexible partitions than the traditional macroblocks. HEVC has greater flexibility in prediction modes and transform block sizes, and it has more sophisticated interpolation and deblocking filters. In 2006 VC-1 was released; VC-1 is a video codec implemented by Microsoft as Windows Media Video (WMV) 9 and standardized by the Society of Motion Picture and Television Engineers (SMPTE). In 2017 the Joint Video Experts Team (JVET) released a call for proposals for a new video coding standard, initially referred to as Beyond HEVC or Future Video Coding (FVC) and now known as Versatile Video Coding (VVC). VVC is being built on top of HEVC for applications in Standard Dynamic Range (SDR), High Dynamic Range (HDR), and 360° video, and is planned to be finalized by 2020. This book presents the new VVC and provides updates on HEVC. The book discusses advances in lossless coding and covers the topic of screen content coding. Technical topics discussed include: beyond High Efficiency Video Coding; the High Efficiency Video Coding encoder; screen content; lossless and visually lossless coding algorithms; fast coding algorithms; visual quality assessment; other screen content coding algorithms; and an overview of the JPEG series.
This book covers the different aspects of modern 3D multimedia technologies by addressing several elements of 3D visual communications systems, using diverse content formats, such as stereo video, video-plus-depth, and multiview, and coding schemes for delivery over networks. It also presents the latest advances and research results regarding objective and subjective quality evaluation of 3D visual content, extending the human factors affecting the perception of quality to emotional states. The contributors describe technological developments in 3D visual communications, with particular emphasis on state-of-the-art advances in the acquisition of 3D visual scenes and emerging 3D visual representation formats, such as multi-view plus depth and light field; evolution to freeview and light-field representation; compression methods and robust delivery systems; and coding and delivery over various channels. Simulation tools, testbeds, and datasets that are useful for advanced research and experimental studies in the field of 3D multimedia delivery services and applications are covered. The international group of contributors also explore the research problems and challenges in the field of immersive visual communications, in order to identify research directions with substantial economic and social impact. 3D Visual Content Creation, Coding and Delivery provides valuable information to engineers and computer scientists developing novel products and services with emerging 3D multimedia technologies, by discussing the advantages and current limitations that need to be addressed in order to develop their products further. It will also be of interest to students and researchers in the field of multimedia services and applications who are particularly interested in advances with significant potential impact on future technological developments.
The last few years have seen rapid acceptance of high-definition television (HDTV) technology around the world. This technology has been hugely successful in delivering a more realistic television experience at home and accurate imaging for professional applications. Adoption of high definition continues to grow as consumers demand enhanced features and greater content quality. Following this trend, the natural evolution of visualisation technologies will be toward a fully realistic visual experience and highly precise imaging. However, using content of even higher resolution and quality is not straightforward, as such videos require significantly higher access bandwidth and more processing power. Methods for radical reduction of video bandwidth are therefore crucial for realising high visual quality. Moreover, it is desirable to look into other ways of accessing visual content, a solution to which lies in innovative schemes for content delivery and consumption. This book presents selected chapters covering technologies that will enable greater flexibility in video content representation and allow users to access content from any device and to interact with it.
This book is about objective image quality assessment, where the aim is to provide computational models that can automatically predict perceptual image quality. The early years of the 21st century have witnessed a tremendous growth in the use of digital images as a means of representing and communicating information, and a considerable proportion of the resulting literature is devoted to methods for improving the appearance of images, or for maintaining the appearance of images that are processed. Nevertheless, the quality of digital images, processed or otherwise, is rarely perfect. Images are subject to distortions during acquisition, compression, transmission, processing, and reproduction. To maintain, control, and enhance the quality of images, it is important for image acquisition, management, communication, and processing systems to be able to identify and quantify image quality degradations. The goals of this book are as follows: (a) to introduce the fundamentals of image quality assessment and explain the relevant engineering problems; (b) to give a broad treatment of the current state of the art in image quality assessment by describing leading algorithms that address these engineering problems; and (c) to provide new directions for future research by introducing recent models and paradigms that differ significantly from those used in the past. The book is written to be accessible to university students curious about the state of the art of image quality assessment, expert industrial R&D engineers seeking to implement image/video quality assessment systems for specific applications, and academic theorists interested in developing new algorithms for image quality assessment or using existing algorithms to design or optimize other image processing applications.
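As a concrete (if very basic) example of the full-reference objective models surveyed in such a book, the snippet below computes PSNR, a classical baseline metric that compares a distorted image against its pristine reference; it is a generic illustration, not an algorithm taken from the book.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference image and a
    distorted copy of the same shape; higher values mean less distortion."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # the two images are identical
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: additive noise lowers PSNR relative to the clean reference.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 10.0, size=ref.shape), 0, 255)
print(f"PSNR of noisy copy: {psnr(ref, noisy):.2f} dB")
```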
In recent years visual devices have proliferated, from massive high-resolution, high-contrast screens to the tiny displays on mobile phones, with their limited dynamic range and color gamut. The wide variety of screens on which content may be viewed creates a challenge for developers. Adapting visual content for optimized viewing on all devices is called retargeting. This is the first book to provide a holistic view of the subject, thoroughly reviewing and analyzing the many techniques that have been developed for retargeting along dimensions such as color gamut, dynamic range, and spatial resolution.