Search Results

Search found 11 results on 1 page for 'downsampling'.

Page 1/1

  • Reducing moiré when downsampling halftone comic images

    - by drawnonward
    How can I reduce moiré effects when downsampling halftone comic book images during live zoom on an iPhone or iPad? I am writing a comic book viewer. It would be nice to provide higher resolution images and allow the user to zoom in while reading the comic book. However, my client is averse to moiré effects and will not allow this feature if there are noticeable moiré artifacts while zooming, which of course there are. Modifying the images to be less susceptible to moiré would only work if the modifications were not perceptible. Blur is specifically prohibited, as is anything that removes the beloved halftone dots. The images are black-and-white halftone and line art. The originals are 600 dpi, but what we ship with the application will be half that at best, so probably 2500 pixels or less tall. So what are my options? If I write a custom downsampling algorithm, would it be fast enough for real time on these devices? Are there other tricks I can use? Would it work to just avoid the size ratios that produce the most visible moiré? As you zoom in and out, there are definitely peaks where the moiré is worst. Is there a way to calculate where those points are and just zoom to a nearby scale that is not as bad? Any suggestions are welcome. I have very little experience with image and signal processing, but am enjoying the opportunity to learn. I know nothing of wavelets and acutance and other jargon, so please be verbose.
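    One idea from the question, snapping the zoom to the least moiré-prone scales, can be sketched as follows. This is a rough heuristic, not from the original post: beating between the halftone grid and the output pixel grid tends to be weakest when each halftone period maps to a whole number of device pixels, so the requested zoom is rounded to the nearest such scale. The dot-pitch constant is a hypothetical example; it would have to be measured from the actual source art.

        # Heuristic sketch: snap a requested zoom factor to the nearest scale
        # at which one halftone period covers an integer number of pixels.
        HALFTONE_PITCH_PX = 7.2  # dot period in source pixels (hypothetical value)

        def snap_zoom(requested_zoom):
            screen_period = HALFTONE_PITCH_PX * requested_zoom  # on-screen dot period
            snapped_period = max(1, round(screen_period))       # whole pixels per dot
            return snapped_period / HALFTONE_PITCH_PX

        print(snap_zoom(0.62))  # ~0.556: 4.464 px/dot snapped to 4 px/dot

    The snapping itself is trivially fast; whether the resulting set of scales is visually acceptable would need testing on the real art.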

    Read the article

  • Downsampling and applying a lowpass filter to digital audio

    - by twk
    I've got a 44.1 kHz audio stream from a CD, represented as an array of 16-bit PCM samples. I'd like to cut it down to an 11.025 kHz stream. How do I do that? From my days of engineering class many years ago, I know that the stream won't be able to describe anything above about 5.5 kHz accurately anymore, so I assume I want to cut everything above that out too. Any ideas? Thanks. Update: There is some code on this page that converts from 48 kHz to 8 kHz using a simple algorithm and a coefficient array that looks like { 1, 4, 12, 12, 4, 1 }. I think that is what I need, but I need it for a factor of 4x rather than 6x. Any idea how those constants are calculated? Also, I end up converting the 16-bit samples to floats anyway, so I can do the downsampling with floats rather than shorts, if that helps the quality at all.
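    For reference, a minimal sketch of a factor-of-4 decimation using SciPy (assuming the samples are already in a NumPy array). signal.decimate designs the anti-aliasing low-pass filter itself, so a hand-tuned coefficient table like the one on the linked page is not needed:

        import numpy as np
        from scipy import signal

        def downsample_by_4(pcm16):
            x = pcm16.astype(np.float64) / 32768.0  # 16-bit PCM to floats in [-1, 1)
            # FIR low-pass at roughly the new Nyquist (~5.5 kHz), then keep every 4th sample
            y = signal.decimate(x, 4, ftype='fir')
            return np.clip(y * 32768.0, -32768, 32767).astype(np.int16)

    Working in floats, as the question suggests, does help: it avoids accumulating rounding error inside the filter.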

    Read the article

  • Downsampling the number of entries in a list (without interpolation)

    - by Dave
    I have a Python list with a number of entries, which I need to downsample using either:
    - A maximum number of rows. For example, limiting a list of 1234 entries to 1000.
    - A proportion of the original rows. For example, making the list 1/3 its original length.
    (I need to be able to do both, but only one is used at a time.) I believe that for the maximum number of rows I can just calculate the proportion needed and pass that to the proportional downsizer:

        def downsample_to_max(self, rows, max_rows):
            return downsample_to_proportion(rows, max_rows / float(len(rows)))

    ...so I really only need one downsampling function. Any hints, please? EDIT: The list contains objects, not numeric values, so I do not need to interpolate; dropping objects is fine.
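    A minimal sketch of the one missing function, reusing the question's own name for it (the body is an assumption, not the asker's code). Accumulating the proportion and emitting an entry each time the accumulator crosses 1 drops entries evenly and never interpolates, so it works for lists of arbitrary objects:

        def downsample_to_proportion(rows, proportion):
            out, acc = [], 0.0
            for row in rows:
                acc += proportion
                if acc >= 1.0:   # time to keep an entry
                    acc -= 1.0
                    out.append(row)
            return out

        print(len(downsample_to_proportion(range(1234), 1000 / 1234.0)))  # ~1000

    A simple stride (rows[::step]) also works, but only when the proportion is close to 1/step for some integer step; the accumulator handles awkward ratios like 1000/1234 as well.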

    Read the article

  • MP3 downsampling / compression in Java

    - by veenit33
    Well, I was looking forward to modifying the bit rate of an MP3 file in Java. I want to downsample (change the bit rate of) the MP3 file from 256/384 kbps to, say, 64/128 kbps. (I guess this is the only way one can achieve MP3 compression... or is there another way?) I searched for LameOnJ, but that website is temporarily down, so I'm not able to get the license file, which needs to be downloaded every 2 days. Is this possible using JMF? What other options do I have? Regards, Veenit Shah
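    In practice most projects solve this by shelling out to an external encoder rather than transcoding MP3 frames in-process. A sketch of that approach (in Python rather than Java purely for illustration; the Java equivalent uses ProcessBuilder, and the sketch assumes ffmpeg with libmp3lame is installed and on the PATH):

        import subprocess

        def reencode_mp3(src, dst, bitrate='64k'):
            # Decode and re-encode at the lower bit rate; file names are hypothetical.
            subprocess.run(['ffmpeg', '-y', '-i', src,
                            '-codec:a', 'libmp3lame', '-b:a', bitrate, dst],
                           check=True)

        reencode_mp3('input_256k.mp3', 'output_64k.mp3')

    Note that lowering the bit rate is re-encoding (lossy compression), not downsampling in the signal-processing sense, although encoders may also lower the sample rate at very low bit rates.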

    Read the article

  • When running Adobe Acrobat's OCR on a PDF document, which downsampling produces a higher quality: 600 dpi or 72 dpi?

    - by Ricardo Altamirano
    I have a large PDF document that consists of scanned pages of a textbook. I want to run Adobe Acrobat 9's text recognition function on it, but when I do, I'm presented with a menu of downsampling options (shown in a screenshot in the original post). I'm confused by those options. Which option will produce the highest-quality, most readable text? I thought 600 dpi implied a higher-quality image than 72 dpi, so I'm confused by "High (72 dpi)" and "Lowest (600 dpi)".

    Read the article

  • How do I know if my system is capable of playing 24-bit/96 kHz sound?

    - by Igor Zinov'yev
    Let me state for the record that I'm a total noob when it comes to hi-fi sound systems, but I am rather picky about sound quality. Normally I listen to CD recordings ripped to FLAC at 16-bit/44.1 kHz, but I have several albums that were ripped from vinyl to FLAC at 24-bit/96 kHz. It seems that I can't tell the difference between the 16-bit and 24-bit versions (except for some vinyl noise, of course). That could be due to several reasons:
    1. My equipment (onboard audio, monitor headphones) isn't good enough to reveal any difference.
    2. My system is not actually playing the audio at 24-bit/96 kHz.
    3. I am physically unable to hear the difference.
    So here is my question: how do I tell if my system can play 24-bit sound at 96 or 192 kHz? And if it can, how do I tell that it really plays it that way instead of downsampling to 16-bit/44.1 kHz? Also, what hardware (audio cards, amplifiers, etc.) would you recommend for playing such recordings on Ubuntu?
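    On Linux there is a direct way to answer the "is it really playing 24/96?" part: while a track is playing, ALSA reports the format and rate the hardware device was actually opened with under /proc/asound. A small sketch (assuming an ALSA-based stack; paths can differ per machine):

        import glob

        # Print the parameters of every currently open playback stream.
        for path in glob.glob('/proc/asound/card*/pcm*p/sub*/hw_params'):
            with open(path) as f:
                contents = f.read().strip()
            if contents != 'closed':
                print(path)
                print(contents)  # check the 'format:' (e.g. S24_LE/S32_LE) and 'rate:' lines

    If the format line shows S16_LE and the rate shows 44100 while a 24-bit/96 kHz file is playing, something upstream is resampling.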

    Read the article

  • PDF rendering of images seems to vary from viewer to viewer with blurring?

    - by AndyL
    I'm generating PDF figures in Adobe Illustrator CS5 that include embedded images. I've noticed that the images look dramatically different when I display the same PDF in Preview, Skim, or Adobe Reader (I'm on OS X); see the screenshots referenced below. Adobe Reader displays them "correctly", while Skim and Preview each blur the image out in a different way. Is there a setting I can use when saving my PDF from Illustrator so that the images are displayed correctly in Skim and Preview? The PDF was generated in Illustrator and saved without any compression or downsampling. The original PDF is here: http://ge.tt/8iZMR2A (Screenshots in the original post, not reproduced here: Adobe Reader 9, Skim 1.3.18, Preview 4.2, Super User's client-side PDF renderer.)

    Read the article

  • Capture documents in bitonal, or grayscale then downsample

    - by Jason R. Coombs
    I'm about to embark on a document archival process. I'm going to spend a lot of good money to archive some paper (actually microfiche) to TIFF images. I have a choice of 300-dpi bitonal (1-bit, black/white) or 300-dpi grayscale (8-bit). Cost is the same for either format. Data volume (and thus image size) is not a factor. It seems to me that the grayscale, since it is scanned at the same resolution as the bitonal, would always contain more information and could always be downsampled to the equivalent bitonal image. Are there any downsides to selecting grayscale and then later downsampling to bitonal if desired? In other words, is it possible that the scanning software will produce a more accurate (or more legible) bitonal representation than a grayscale image converted to bitonal afterwards?
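    The "downsample later" step itself is straightforward, which is part of the argument for grayscale. A minimal sketch with Pillow (the threshold of 128 and the file names are arbitrary examples); the point is that the binarization threshold can be chosen per page, after the fact, instead of being fixed at scan time:

        from PIL import Image

        def to_bitonal(path, threshold=128):
            gray = Image.open(path).convert('L')  # 8-bit grayscale
            # Map each pixel to pure black or white, then store as 1-bit.
            return gray.point(lambda p: 255 if p >= threshold else 0).convert('1')

        to_bitonal('page_0001.tif').save('page_0001_bw.tif')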

    Read the article

  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is I have quite a bit of training data: I show it over 100 examples of what a car looks like, then over 100 examples of what a person looks like, then over 100 of what a dog looks like, etc. Currently it takes about one week to train the network. This is kind of killing my progress, as I need to adjust and retrain. I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (8 cores, 16 hardware threads with hyperthreading), so this should be fast, yet my processor utilization sits at only about 5%. Are there any tricks to improve Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psych doctoral student, and I am decent as a programmer, but do not know a great deal about performance programming.
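    One observation from the numbers in the question: 5% utilization on a 16-thread machine is roughly one busy thread, which suggests the networks are being trained sequentially. Since the 100+ feature networks are independent, they can be trained in parallel, one per core. A sketch of the idea (in Python for illustration; with Neuroph the equivalent would be submitting one training task per network to something like Java's ExecutorService, and train_network here is a hypothetical placeholder):

        from concurrent.futures import ProcessPoolExecutor

        def train_network(task):
            net_id, training_data = task
            # ... train one feature-detector network on its own data ...
            return net_id

        if __name__ == '__main__':
            tasks = [(i, 'features_%d.dat' % i) for i in range(100)]  # hypothetical data sets
            with ProcessPoolExecutor(max_workers=16) as pool:
                for net_id in pool.map(train_network, tasks):
                    print('network', net_id, 'trained')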

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters:

        h0 = [-0.010597, 0.032883, 0.030841, -0.187035, -0.027984, 0.630881, 0.714847, 0.230378]
        h1 = [-0.230378, 0.714847, -0.630881, -0.027984, 0.187035, 0.030841, -0.032883, -0.010597]

    (The original post plots them on a graph, not reproduced here.) I'm using them to obtain the approximation (lower subband) of an image; this is a(m,n) in the decomposition diagram in the book Digital Image Processing, 3rd Edition, which is where I got the coefficients, so I trust that they are correct. In that diagram, the star symbol denotes one-dimensional convolution (either over rows or over columns), and the down arrow denotes downsampling in one dimension (again, either over rows or over columns). My problem is that the coefficients of h0 sum to more than 1 (approximately 1.4, or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter, and indeed that is what I get (the original post compares my output with the expected result). Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to more than 1? I have the source code, but it's quite long, so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT: What I'm doing is:
    1. Decompose into subbands.
    2. Filter one of the subbands.
    3. Recompose the subbands into the original image.
    Note that the point isn't just to have a displayable subband-decomposed image: I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image to compensate for my decomposition filter making the image brighter, this is what I will have to do:
    1. Decompose into subbands.
    2. Apply intensity scaling.
    3. Filter one of the subbands.
    4. Apply inverse intensity scaling.
    5. Recompose the subbands into the original image.
    Step 2 performs the scaling; this is what @Benjamin is suggesting. The problem is that step 4 then becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong, but what's more likely is that I'm misunderstanding something about the way this all works, which is why I'm asking this question.
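    The coefficients listed are the standard Daubechies db4 (8-tap) analysis filters, and a quick experiment supports the textbook: under the orthonormal convention the low-pass filter sums to sqrt(2), so the approximation subband really is brighter, and no scaling is needed because the synthesis bank supplies the matching gain on reconstruction. A sketch using PyWavelets:

        import numpy as np
        import pywt

        w = pywt.Wavelet('db4')          # the same 8-tap filters as in the question
        print(sum(w.dec_lo))             # ~1.41421, i.e. sqrt(2)

        img = np.random.rand(128, 128)
        cA, detail = pywt.dwt2(img, 'db4')        # decompose one level
        rec = pywt.idwt2((cA, detail), 'db4')     # recompose
        print(np.allclose(rec, img))              # True: perfect reconstruction

    So the brightening is expected inside the analysis/synthesis chain; the approximation subband only needs rescaling for display purposes, not before recomposition.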

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 Media Center HTPC due to a motherboard failure (a really old motherboard and CPU; it was on its last legs). I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with:
    - Core i5 2500K (3.3 GHz)
    - Corsair XMS3 2x2GB DDR3 (4GB)
    - ASUS P8H61-M LE/CSM
    - MicroCenter 64GB SSD
    - the BluRay drive from the previous build (forget the brand)
    The system is pretty awesome and plays everything I have perfectly. I almost went with an Atom solution; however, there have been numerous notes that they do not handle Netflix Instant Watch well, and I am a heavy Netflix IW user. High-definition BluRay rips work well, although they usually contain lower audio quality than the BluRay discs they were ripped from.
    The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both and occurs at seemingly random intervals. I've used PowerDVD 9, the PowerDVD 11 trial, and the WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant one third of the total available system memory. When playback is fine, it's superb; the video is crystal clear. The audio quality is OK, certainly not what I would expect from a BluRay disc.
    I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious whether the audio is my primary problem here, the cause of the stuttering I am encountering. When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until, for whatever reason, everything picks up and runs fine (usually after a few seconds to a couple of minutes). The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback.
    Has anyone encountered issues like this playing back BluRay discs on a PC (namely with PowerDVD; WinDVD was FAR worse and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further)? Is there any reason to suspect the video decoding as the problem? (Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, CPU, or RAM are causing the stuttering? (All three are pretty blazing fast, faster than the hardware I replaced, which seemed to play BluRay fine with PowerDVD 9.)
    I've read a bit about the ASUS Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD audio cards. They seem to be the only way to get true full-quality, uncompressed BluRay audio bitstreamed over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued? (If so, that would indeed be sad; they sound simply fantastic.)

    Read the article
