Search Results

Search found 9901 results on 397 pages for 'audio processing'.


  • Limiting Sybase ASE 15 CPU usage on VM

    - by reiniero
    I've set up a single-CPU Sybase ASE 15.7 test/hobby/experimentation system on a Debian Squeeze x64 KVM VM. I notice the CPU load goes to 100% and stays there. I'm definitely not a Sybase guru; I'm only interested in seeing whether some programs I'm running work against the database. Looking at the Sybase docs, it seems ASE detects that the machine is idle and then takes over all processing while waiting for a connection (and, if needed, apparently does some housekeeping). Normally that would be fine, but as it is running in a VM it takes away processor resources other VMs could use - and the increased fan noise of the PC near my desk annoys me. I've tried to remedy this: I set the "runnable process search count" parameter from DEFAULT (2000, IIRC) to 3 in /opt/sybase/ASE-15_0/SYBASE.cfg, and following http://sybase.reygrobellet.com/tutorials/install_sybase_vb/standalone04_configure_oralin11#TOC-Configure-kernel I added this to my /etc/init.d/sybase startup script: echo 0 > /proc/sys/kernel/randomize_va_space (though I don't think it'll make much difference). How can I tell Sybase to "behave" and not hog the processor? I don't mind reduced performance.
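
    A minimal sketch of making that change through sp_configure instead of hand-editing SYBASE.cfg (the server name and sa password below are placeholders, and whether the new value takes effect without a restart depends on the ASE version):

        # load the Sybase environment, then push the setting via isql
        . /opt/sybase/SYBASE.sh
        printf 'sp_configure "runnable process search count", 3\ngo\n' \
            | isql -S TESTSRV -U sa -P secret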

    Read the article

  • Increase application performance on Amazon AWS

    - by Honus Wagner
    I've got a client with an MVC v1 (.NET) application running on a micro instance. On this instance I've got .NET, IIS 7.5, and MS SQL Server 2008 running to handle the application. The client has reported that it is taking nearly 10 seconds to process each request. Even loading the initial login page takes about that long, then logging in takes that long, and so on. The currently running instance specs are as follows: 615 MB RAM, Intel Xeon CPU E5430 @ 2.66GHz (2.78 GHz), 64-bit. Is the memory availability the issue, or is it the processing power? I foresee two options: change to a larger instance, or set up a 2-tier architecture with two micro instances. Which of these will give the application better performance? Thanks in advance.

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
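
    A short sketch of the two directions hinted at above, assuming GNU coreutils (directory names are examples):

        # Hard-link farm: cheap and fast, but only safe if every tool replaces
        # files (unlink + recreate) rather than editing them in place.
        cp -al original/ work-hardlink/

        # True per-file copy-on-write, on a file system with reflink support
        # such as btrfs:
        cp -a --reflink=always original/ work-reflink/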

    Read the article

  • Netcat server output with multiple greps

    - by Sridhar-Sarnobat
    I'm trying to send some data from my web browser to a txt file on another computer. This works fine: echo 'Done' | nc -l -k -p 8080 | grep "GET" >> request_data.txt Now I want to do some further processing before writing the HTTP request data to my txt file (involving regex manipulation). But if I try to do something like the following, nothing is written to the file: echo 'Done' | nc -l -k -p 8080 | grep "GET" | grep "HTTP" >> request_data.txt (for simplicity of explanation I've used another grep instead of, say, awk). Why does the second grep not get any data from the output of the first grep? I'm guessing piping with netcat works differently to what I've assumed to get this far. How do I perform a second grep before writing to my txt file? My debugging so far suggests: it is nothing to do with stderr vs stdout, and parentheses don't help.
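
    One thing worth ruling out - a guess, not a confirmed diagnosis: when GNU grep writes into a pipe rather than a terminal it block-buffers its output, so the second grep (and the file) may simply never see a full buffer's worth of data from a low-traffic listener. A sketch of the same pipeline with per-line flushing:

        echo 'Done' | nc -l -k -p 8080 \
            | grep --line-buffered "GET" \
            | grep --line-buffered "HTTP" >> request_data.txt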

    Read the article

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data, and we are now processing files from Europe. These files are in CSV format with text qualifiers. For an example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS attempts to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS even with the text qualifiers. If I change the field to CURRENCY it throws a conversion error. I would like SSIS to deal with this directly without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
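
    The string transformation itself is small; a sketch in shell of what a Derived Column or Script Component inside the package would need to reproduce (illustration only - the values are the examples from the question):

        normalize() {
            # "123.456,00" or "123456,99"  ->  "123456.00" / "123456.99"
            echo "$1" | sed -e 's/\.//g' -e 's/,/./'
        }
        normalize "123.456,00"   # prints 123456.00
        normalize "123456,99"    # prints 123456.99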

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:
    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between adding each one to watch for issues (interestingly, no issues occurred except with the Documents folder - yet when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble; it appears it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes - and quitting it), deleted the .Spotlight-V100 directory from the affected drive, and restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated indexing time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?
    ---update 2-6-11
    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop output stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:
        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data
    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M
    __final edit__
    I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process ID of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find the actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M
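
    A condensed sketch of the monitoring loop described above, for anyone wanting to reproduce it (the volume name is the one from this post; adjust PIDs and paths to whatever Activity Monitor shows):

        sudo mdutil -s /Volumes/ONM\ Data            # confirm indexing is enabled on the volume
        ps ax | egrep '[m]ds|[m]dworker'             # find the indexer processes
        sudo opensnoop -p "$(pgrep -o -x mdworker)"  # watch which files the oldest mdworker touches
        # When mdworker exits, the last path printed points at the folder to
        # exclude via System Preferences > Spotlight > Privacy.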

    Read the article

  • PHP script timed out, or otherwise killed on Apache under CentOS (shared host)

    - by MarkS
    When trying to run a PHP script (CentOS, Apache, PHP 5.2) that may take a long time, it is apparently killed after 45 minutes. The PHP script is invoked from a web browser and, in certain situations, it does a lot of work processing a POP3 mailbox and sending emails as part of an automated monitoring system. Running the PHP script from the command line might be a better option, but first I want to understand what is going on. I ran a test script, and it appeared to finally give an internal server error (500?) after 45 minutes. Where is this limit set, and what is killing the script, if that is what is happening? It's running on a shared host at Hostgator.com.
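
    A few places a ceiling like this can come from, sketched as things to check rather than a diagnosis (on a shared host some of them will be locked down):

        php -i | egrep 'max_execution_time|max_input_time'   # PHP's own limits
        grep -ri '^Timeout' /etc/httpd/conf/                 # Apache's request timeout
        # Running the job from cron sidesteps the web-server timeouts entirely
        # (script path is a placeholder):
        php -d max_execution_time=0 /path/to/monitor.php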

    Read the article

  • Opscode Chef nginx compile from source issue reports successful run but does nothing

    - by v_abhi_v
    I am trying to install nginx from source with Opscode Chef, and it's a bit weird: the run completes without complaining, but nginx does not get installed either. This is what my role attributes look like:
        "nginx":{
            "default_site_enabled":false,
            "version":"1.2.6",
            "init_style":"init",
            "install_method":"source",
            "configure_flags":[
                "--without-http_access_module", "--without-http_auth_basic_module",
                "--without-http_autoindex_module", "--without-http_browser_module",
                "--without-http_charset_module", "--without-http_fastcgi_module",
                "--without-http_memcached_module", "--without-http_referer_module",
                "--without-http_scgi_module", "--without-http_split_clients_module"
            ],
            "log_dir":"/var/log/nginx",
            "binary":"/opt/nginx/sbin/nginx",
            "source":{
                "prefix":"/opt/nginx/dist",
                "modules":["http_ssl_module", "http_gzip_static_module" ]
            }
        },
    The Chef log shows:
        [2012-12-19T02:37:44+00:00] INFO: Processing bash[compile_nginx_source] action run (nginx::source line 82)
        [2012-12-19T02:37:45+00:00] INFO: bash[compile_nginx_source] ran successfully
    I am clueless about what's going on :(
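
    One detail stands out, though it is not conclusive: the log shows the bash[compile_nginx_source] resource "ran successfully" in about one second, far too quick for an actual configure/make of nginx, so the embedded script probably exited early. A sketch of running the equivalent build by hand (version and paths taken from the role attributes above) to surface whatever error Chef is swallowing:

        cd /tmp
        wget http://nginx.org/download/nginx-1.2.6.tar.gz
        tar xzf nginx-1.2.6.tar.gz && cd nginx-1.2.6
        ./configure --prefix=/opt/nginx/dist \
            --with-http_ssl_module --with-http_gzip_static_module \
            --without-http_fastcgi_module    # ...plus the remaining --without flags above
        make && sudo make install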

    Read the article

  • Predictive vs Least Connection Load Balancing Techniques

    - by Mani
    I have a Windows-based desktop application that communicates via TCP to the application servers (Windows 2003). There are no sticky sessions between client calls. We have exactly 2 servers to load balance, and we are thinking of using an F5 hardware NLB. The application is a heavy-load type: not much business logic in the services, but it retrieves quite a large amount of data most of the time - maybe 5,000 to 10,000 records on average. It is used mainly for storing and retrieving data, with no special processing of data or calculations running on the server side. I am favouring 'predictive', considering my services take a while at times to return data, so tracking that feedback, as predictive does, should yield better routing. I am not sure if the given information is sufficient to suggest ideas, but considering all this, what would be some suggestions, things to consider, and which is better between Predictive and Least Connections? Thanks.

    Read the article

  • Hide notification area GPO not applying

    - by Richard
    I have created a GPO to hide the notification area on Windows XP SP3. The GPO must apply to all students, but only in certain rooms, so I've also enabled loopback processing on the GPO and linked it to the OUs the computers are in. I've then added a group to the security filter that contains all student accounts. This is not applying - it doesn't even show up in gpresult. I have also tried linking it in the Students OU, which contains all student accounts, and applying a security filter with a group of the computers I want it to apply to. This didn't work either. It's possible I'm missing something straightforward. Would a WMI filter do the job, and if so, how would I go about writing one so that it only applies to computers whose names begin with XX-RT, for example?

    Read the article

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone, my background is totally unrelated to video stuff, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to be uploaded to YouTube and Vimeo. First problem: videos generated by the camera with no processing appear like this: http://www.youtube.com/watch?v=AANbl_DTuzE when I play them with Totem (Linux) or VideoLAN. Second problem: when I try to encode the videos produced by the camera using mencoder, I get the video at the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use: mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp -oac lavc -ovc lavc -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 -vf scale=1280:720 -ofps 25000/1001 -o $outputFile Any ideas? Thanks in advance
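
    The horizontal comb-like lines in the sample clip look like interlacing (camcorders of this class typically record interlaced video), so one thing worth trying - a sketch, not a guaranteed fix - is putting a deinterlace filter in front of the scaler in the same command:

        mencoder "$inputFile" -aspect 16:9 -of lavf -lavfopts format=psp \
            -oac lavc -ovc lavc \
            -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
            -vf yadif=1,scale=1280:720 -ofps 25000/1001 -o "$outputFile"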

    Read the article

  • NAS that supports NZB downloading for around £150 ($220) or less (without hard drive)

    - by Jigs
    I have seen a number of NAS units around that price, but I am worried that they may not be able to handle the processing of .rar files (I know that can be quite CPU intensive). Does anyone have any experience with SABnzbd or hellanzb - or similar - on their NAS? In terms of features, the main requirement is NZB downloading; I am quite flexible on the other features. Wi-Fi support would be nice, but not essential. Torrent downloading would also be nice. One disk drive would probably be enough. Easy installation of applications would be nice... but again, I am sure I can follow a tutorial.

    Read the article

  • Clients not recognizing secondary LDAP groups?

    - by Nick
    I'm having an issue where users who are members of secondary groups in LDAP are not being recognized as members of those groups by the client. In this case, user jdoe is not being recognized as a member of the projects group. On the client, getent group shows: projects:*:20001:1001,1002,1003,1004,1005,1006 and getent passwd shows: jdoe:x:1003:10003:John Doe:/home/jdoe:/bin/bash But if I log in to the client as jdoe and run id, I get: uid=1003(jdoe) gid=10003(jdoe) groups=24(cdrom),25(floppy),29(audio),44(video),46(plugdev),10003(jdoe) It recognizes jdoe's primary group and the secondary groups that the client appends to all LDAP users, but the LDAP secondary groups are not in the list. We can see that jdoe's ID is in the projects group, so why is the projects group not showing when jdoe runs the id command? The group objects are basic posixGroup entries, with a memberUid attribute for each of their members. We are using OpenLDAP on Ubuntu 10.04 server and clients.
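
    A few quick checks worth running on an affected client, sketched with the names from this example (a stale nscd cache is a common culprit on Ubuntu LDAP clients):

        getent group projects              # does the LDAP group list jdoe's entry?
        grep '^group:' /etc/nsswitch.conf  # the group database should include ldap
        sudo nscd -i group                 # flush nscd's cached group lookups, if nscd is running
        id jdoe                            # re-query; log out/in as jdoe for the session's groups to refresh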

    Read the article

  • Database Server Hardware components (order of importance), CPU speed VS CPU cache vs RAM vs DISK

    - by nulltorpedo
    I am new to the database world and would like to know which hardware specs are crucial when it comes to database performance. I have searched the internet and found this so far (in order of decreasing importance): 1) Hard disk: get an SSD, basically (far more IOPS than spinners). 2) Memory: get as much as you can afford. 3) CPU: for the same money spent, prefer a larger cache size over speed. Are these findings sensible? EDIT: I would like to focus on CPU speed vs CPU cache size. EDIT 2: The database is used to store some combination of ints and int arrays with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The insert statements are very few. EDIT 3: Also, we have a lot of views (basically SELECT queries).

    Read the article

  • mysql settings - using the available resources

    - by Christian Payne
    I've got a lot of processing work I need to run on a MySQL server. I've installed MySQL 5.1.45-community on Win 2007 64-bit. It's running on a Xeon, 3 GHz, with 6 processors and 8 GB of RAM. It doesn't seem to matter what queries I run (or how many I run at the same time): when I look in Task Manager, I see one processor out at 100% while the other 5 are idle. Memory is static at 1.54 GB. When I installed MySQL, I used the wizard and selected the default "server" (not workstation) option. I feel like I should be getting more bang for my buck. Is there something else I should be monitoring, or something I should change to use the other system resources?
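
    One relevant detail, stated as background rather than a fix: MySQL executes each connection's query on a single thread, so a single heavy query can only ever saturate one core. A sketch of spreading independent work across cores by running it from several client sessions in parallel (file and database names are examples; credentials are assumed to come from MYSQL_PWD or a ~/.my.cnf):

        for q in batch1.sql batch2.sql batch3.sql batch4.sql; do
            mysql -u root mydb < "$q" &
        done
        wait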

    Read the article

  • Is Flash typically slow on Linux?

    - by CSarnia
    Specifically, I'm running Mint 8 (Helena). I'm extremely new to Linux and was searching for a solution that was user-friendly and GUI-oriented. The box won't be used for much other than web browsing and word processing. Anyway, it runs relatively smoothly, except for YouTube videos... especially full-screen, which runs at something like 1 FPS and, even after closing, slows Firefox to a crawl until I restart it. I'd seen an xkcd comic on the matter, but regarded it as a joke until now. Is this actually a common problem? Are there any remedies I can try to smooth things out?

    Read the article

  • What value does SenderID provide over SPF and DKIM?

    - by makerofthings7
    I understand that SPF "binds" a message envelope to a set of permitted IP addresses. SenderID (with the default pra option) "binds" the message header to a set of permitted IPs, in addition to the SPF logic. DKIM "binds" the From address header (and any additional headers the sender chooses), plus the body, to a DNS domain name. I'm using the word "bind" above instead of "authorize" because it makes more sense (to me). Questions: If SPF already verifies the message FROM in the envelope, why is there a need to check the headers? When would the need to verify the envelope (SPF) differ from verifying the headers (SenderID)? If I'm already verifying the headers with DKIM, why do I need SenderID? Most large companies I've checked don't disable SenderID with an explicit record; eBay is a notable example of one that does. What is the rationale for disabling SenderID "pra" processing of outbound messages?
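
    One concrete way to ground the comparison is to look at what a given domain actually publishes; a sketch using dig (eBay is the example cited above, and the DKIM selector shown is a guess - selectors vary per sender):

        dig +short TXT ebay.com                        # v=spf1 ... records (SPF, envelope sender)
        dig +short TXT ebay.com | grep -i 'spf2.0'     # spf2.0/pra records (SenderID)
        dig +short TXT selector1._domainkey.ebay.com   # a DKIM public key, if that selector exists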

    Read the article

  • Good HDMI splitter solution

    - by Mehper C. Palavuzlar
    I have a full HD TV which has only 2 HDMI ports on it. Since I have more than 2 devices to connect to the TV (e.g. laptop, game console, DVD player), it becomes inconvenient to plug and unplug HDMI cables every time I need to use the relevant device. I need a cheap solution to increase the number of my HDMI ports to at least 3. What type of splitter do you recommend? Does the quality of the splitter matter, or do they all produce the same audio and video quality?

    Read the article

  • Firefox "auto-complete" is very slow

    - by netvope
    Firefox version: 3.6. My places.sqlite is rather big (114 MB, after being optimized by SpeedyFox). If I turn on auto-complete, it may take 1 or 2 seconds for Firefox to accept a newly typed URL. To reproduce the issue: type a URL into the URL bar and press Enter. Nothing happens, and Firefox consumes 100% CPU (actually 50% of 2 cores) for 1 to 2 seconds. Then Firefox starts the network connection and loads the web page. Since it consumes 100% CPU, I don't think the bottleneck is the disk. I have some experience with SQLite and I know a 100 MB DB is very small. To produce this delay, Firefox must be doing some expensive processing or inefficient queries. The issue does not appear if auto-complete is turned off, or the URL is frequently used, or a new profile with no history is used. Does anyone have any idea how to solve the problem? Should I file this as a bug? I don't want to give up my 100 MB of history, but I don't want to give up auto-complete either :)
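
    Since the stall is CPU-bound inside the location-bar query, one low-risk experiment - a sketch, to be run with Firefox closed and after backing up the profile (the path shown is the Linux default; on Windows the profile lives under %APPDATA%\Mozilla\Firefox\Profiles) - is to re-analyze and defragment the database by hand:

        cd ~/.mozilla/firefox/*.default
        cp places.sqlite places.sqlite.bak
        sqlite3 places.sqlite 'PRAGMA integrity_check; ANALYZE; VACUUM;'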

    Read the article

  • In SharePoint, why can I "multiple document upload" a 47,297 byte file, but not a 47,298 byte file?

    - by Jim
    It's strange. I can upload a document named 47k.txt that is 47,297 bytes using the "Multiple Document Upload" feature. However, if I add a single character to the end of the text file, the upload fails. Also, if I rename the file to 47kx.txt and try to upload it, it fails. This is the error I get in the SharePoint logs: Category: General; Event ID: 8jzm; Level: High; Message: #90012: An error was encountered while processing files on the server. Try uploading one file at a time by using the single upload page. The same error is reported in a message box on the client side. Does anybody know why this would happen?

    Read the article

  • Strange requests coming from Korean Site

    - by Jim Jeffers
    Lately I've been finding a lot of strange requests like this coming into my Rails app: Processing ApplicationController#index (for 189.30.242.61 at 2009-12-14 07:38:24) [GET] Parameters: {"_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt???"}} ActionController::RoutingError (No route matches "/browse/brand/nike ///" with {:method=>:get}): It looks like it's automated, as I get a lot of them, and I notice the strange parameters they're trying to send: "_SERVER"=>{"DOCUMENT_ROOT"=>"http://www.usher.co.kr/bbs/id1.txt??? Is this something malicious, and if so, what should I do about it?
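
    For what it's worth, requests like this usually look like automated remote-file-inclusion probes (a DOCUMENT_ROOT parameter pointing at an external .txt file is the classic signature), aimed at vulnerable PHP apps rather than Rails. A sketch for gauging the volume and sources from the Rails log (the log path is the Rails default):

        grep -c 'usher.co.kr' log/production.log
        grep -B1 'usher.co.kr' log/production.log \
            | grep 'Processing' | awk '{print $4}' | sort | uniq -c | sort -rn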

    Read the article

  • Understanding this error: apr_socket_recv: Connection reset by peer (104)

    - by matthewsteiner
    So, if I do some benchmarking with ApacheBench (ab) and use large numbers of requests, then sometimes in the middle of a test I get this error. I don't even know what it means. How can I fix it? Or is it just something that will happen if the server gets too many hits anyway? The problem is, if I run 10,000 hits, it all runs perfectly; if I run it again, it gets to 4,000 and gives the error: apr_socket_recv: Connection reset by peer (104). A little about my setup: I have nginx serving static requests and passing dynamic ones to Apache. The file in question is served from cache by nginx, so I guess it's probably got to do with how nginx is handling the requests? Ideas?
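
    "Connection reset by peer" mid-run usually just means the server side dropped a connection; a sketch of things to check while the benchmark is running, none of which is a confirmed cause for this setup:

        ss -s                                  # socket summary: watch for resets/overflows
        sysctl net.core.somaxconn              # kernel listen-backlog ceiling
        sysctl net.ipv4.ip_local_port_range    # ephemeral ports available for 10,000 connections
        tail -f /var/log/nginx/error.log       # worker_connections exhaustion shows up here (default log path)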

    Read the article

  • Mini-ITX board for AM3 Athlon X4 600e processor.

    - by Kamil Zadora
    Hello, I am planning to build a PC to control a robotic platform that I am building (about 50% complete). I need more power than the Atom platform can provide, as the robot will need to do on-the-fly image processing to work as intended. I was considering using the Athlon X4 600e as it is rated at 45 W maximum output; underclocked, it would probably go below 30-35 W. I am not aiming for very long battery life, but the 17 Ah, 12 V battery should keep it running for a few hours. My problem is the motherboard. I am space-limited, so I am looking for a nice mini-ITX AM3 motherboard to match the processor. It is hard to find many tests of the power usage of the motherboards themselves (for example, tests using the same processor on different motherboards are usually done the other way around). Could you provide any motherboard examples or suggest what chipset to look for? Thank you in advance.

    Read the article

  • Securely wiping a file on a tmpfs

    - by Nanzikambe
    I have a script that decrypts some data to a tmpfs. The directory is secure (permissions), the machine's swap is encrypted (random key on boot), and when the script is done it does a 35-pass wipe (Peter Gutmann) of the cleartext on the tmpfs. I do this because I'm aware wiping files on a journaling file system is insecure: data may be recovered. For discussion, here are the relevant bits extracted:
        # make the tmpfs
        mkdir /mnt/tmpfs
        chmod 0700 /mnt/tmpfs
        mount -t tmpfs -o size=1M tmpfs /mnt/tmpfs
        cd /mnt/tmpfs
        # decrypt the data
        gpg -o - <crypted_input_file> | \
            tar -xjpf -
        # do processing stuff
        # wipe contents
        find . -type f -exec bcwipe -I {} ';'
        # nuke the tmpfs
        cd ..
        umount -f /mnt/tmpfs
        rm -fR /mnt/tmpfs
    So, my question: assuming for the moment that nobody is able to read the cleartext in the tmpfs while it exists (I use umask to set the cleartext to 0600), is there any way any trace of the cleartext could remain either in memory or on disk after the snippet above completes?

    Read the article

  • Why does VIM say there is trailing whitespace on this command?

    - by Jesse Atkinson
    I am trying to write a "beautify CSS" command in Vim that sorts and alphabetizes all of the CSS properties, and also checks whether there is a space after each colon and inserts one if it is missing. Here is my code: nnoremap <leader>S :g#\({\n\)\@<=#.,/}/sort | %s/:\(\S\)/: \1/g<CR> :command! SortCSSBraceContents :g#\({\n\)\@<=#.,/}/sort | %s/:\(\S\)/: \1/g These work independently; however, I am trying to pipe them into one command. On save, Vim says: Error detected while processing /var/home/jesse-atkinson/.vimrc: line 196: E488: Trailing characters Any ideas?

    Read the article
