Search Results

Search found 7538 results on 302 pages for 'parallel processing'.


  • oracle sql query to list all the dates of previous month

    - by Suresh S
    I have a requirement to list all the dates of the previous month, like 20101201, 20101202, 20101203, ..., 20101231. Kindly let me know if there is a better way to do it than this query:

        select TO_CHAR(TRUNC(SYSDATE,'MM')-1,'YYYYMMDD')-(level-1) as EACH_DATE
        from dual A
        connect by level < (TO_NUMBER(TO_CHAR(TRUNC(SYSDATE,'MM')-1,'DD'))+1)

    Also, please let me know the problem with this query; it fails with "missing right parenthesis":

        SELECT /*+ PARALLEL (A,8) */ /*+ DRIVING_STATE */
               TO_CHAR(TRUNC(TRUNC(SYSDATE,'MM')-1,'MM'),'MONYYYY') "MONTH",
               TYPE AS "TRAFF", COLUMN, A_COUN AS "A_COUNT", COST
        FROM DATA_P B
        WHERE EXISTS (
            select TO_NUMBER(TO_CHAR(TRUNC(SYSDATE,'MM')-1,'YYYYMMDD')-(level-1)) EACH_DATE
            from dual A
            connect by level < TO_NUMBER(TO_CHAR(TRUNC(SYSDATE,'MM')-1,'DD')+1)
            WHERE A.EACH_DATE = B.DATE
            order by EACH_DATE ASC
        )

    Read the article

  • App to slice'n'dice video, specifically remove chunks, on a Mac?

    - by Phillip Oldham
    I have a couple of collections of DVD box sets I've ripped to my Mac. Now I'd like to sweeten the viewing experience by removing the title sequences and credits, so that viewing doesn't mean I have to keep reaching for the remote to skip 30 seconds of annoying music (think watching multiple episodes of Family Guy). An app that lets me do this reasonably quickly by hand would be great, but it would be perfect if I could dump a load of commands into a file and have everything trimmed while the Mac is "inactive". Ideally I would just specify the chunks of time to remove from each original file. I had a quick look at importing into iMovie to do it manually and gave up at the "Processing Thumbnails" stage: it said it would take a couple of hours to produce them for a 45-minute MP4 file, which I can understand at 25 fps, but I'm not willing to wait, especially when I've got over a week's worth of files. Any suggestions?
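
    One batch-friendly route (not something the original post mentions, so treat it as an assumption) is to script ffmpeg: copy out the time ranges you want to keep without re-encoding, then join the pieces with the concat demuxer. A minimal Python sketch, assuming ffmpeg is installed and the keep-ranges for each file are already known:

        # Hypothetical sketch: keep the listed time ranges from a source file using
        # ffmpeg stream copy, then join them. File names and ranges are placeholders.
        import os
        import subprocess
        import tempfile

        def trim(source, keep_ranges, output):
            """keep_ranges is a list of (start_seconds, duration_seconds) tuples."""
            parts = []
            for i, (start, duration) in enumerate(keep_ranges):
                part = f"part_{i}.mp4"
                subprocess.run(["ffmpeg", "-y", "-ss", str(start), "-t", str(duration),
                                "-i", source, "-c", "copy", part], check=True)
                parts.append(part)
            # the concat demuxer reads a text file listing the pieces to join
            with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
                for part in parts:
                    f.write(f"file '{os.path.abspath(part)}'\n")
                list_file = f.name
            subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                            "-i", list_file, "-c", "copy", output], check=True)

        # e.g. drop a 30-second title sequence that starts 45 seconds in
        trim("episode01.mp4", [(0, 45), (75, 1200)], "episode01_trimmed.mp4")

    Stream copy cuts on keyframes, so the edit points are approximate, but the whole batch can run unattended overnight.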

    Read the article

  • Is it a hard problem?

    - by Lukasz Lew
    I can't solve it. You are given 8 integers:

        A, B, C  representing a line on the plane with equation A*x + B*y = C
        a, b, c  representing another line
        x, y     representing a point on the plane

    The two lines are not parallel, so they divide the plane into 4 pieces, and the point (x, y) lies inside one of these pieces. Problem: write a fast algorithm that finds a point with integer coordinates, in the same piece as (x, y), that is closest to the intersection point of the two given lines. Note: this is not homework; it is an old Euler-type task that I have absolutely no idea how to approach.

    Read the article

  • What happens if a server never receives the RST packet?

    - by Rob
    Someone recently decided to show me a POC of a new denial-of-service method using SYN/TCP that he's figured out. I thought it was complete nonsense, but after explaining to him about SYN - SYN/ACK - RST, he left me speechless. He told me, "What if the server you're tricking into sending the SYN/ACK packets can't receive the RST packet?" I have no idea. He claims that the server will keep trying to send SYN/ACK packets, and that the packet rate will continue to build up. Is there any truth to this? Can anyone elaborate? Apparently, the way it works is this:

        1. He spoofs the source IP of the SYN packet to be the target's IP.
        2. He then sends the SYN packet to a handful of random servers.
        3. They all reply with their SYN/ACK packet to the target IP, of course.
        4. The target responds with RST, as we know.
        5. BUT somehow he keeps the target from sending the RST, or keeps the random servers from processing it.

    With this, apparently the servers will keep trying to send the SYN/ACK packets, producing something of a "snowball" effect.

    Read the article

  • How can I get a notification from my server if the mail queue stops

    - by Ash
    I am using QMail with Plesk 10 on an Apache server. Occasionally the mail queue stops processing emails - this most recently happened when an email account got hacked and started sending hundreds of emails. We did not find out about it until a client of ours contacted us to say that their emails were not being received, so we checked the mail queue and, lo and behold, the service had stopped. In the future I would like to be notified when the mail queue stops. How can I set something up so the server will run a command whenever the mail queue stops?
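
    One way to approach this, sketched below, is a small check script run from cron that looks at the queue and the qmail-send process and emails an alert when something looks wrong. The paths, threshold, and addresses are assumptions for illustration, not a verified Plesk recipe, and if the local mail system itself is what has died, the alert obviously needs to leave the box some other way (external SMTP, SMS gateway, etc.).

        # Hypothetical cron-driven check. Assumes qmail-qstat lives in /var/qmail/bin
        # and "stuck" is approximated by a large queue or a missing qmail-send process.
        import smtplib
        import subprocess
        from email.message import EmailMessage

        QSTAT = "/var/qmail/bin/qmail-qstat"
        THRESHOLD = 500                      # alert if this many messages are queued
        ALERT_TO = "admin@example.com"       # placeholder address

        def queue_size():
            out = subprocess.run([QSTAT], capture_output=True, text=True, check=True).stdout
            # qmail-qstat prints lines like "messages in queue: 42"
            return int(out.splitlines()[0].split(":")[1])

        def qmail_send_running():
            return subprocess.run(["pgrep", "-f", "qmail-send"],
                                  capture_output=True).returncode == 0

        def alert(body):
            msg = EmailMessage()
            msg["Subject"] = "qmail queue problem"
            msg["From"] = "monitor@example.com"
            msg["To"] = ALERT_TO
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

        if __name__ == "__main__":
            size = queue_size()
            if size > THRESHOLD or not qmail_send_running():
                alert(f"queue size={size}, qmail-send running={qmail_send_running()}")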

    Read the article

  • Help with output generated by this C code using fork()

    - by Seephor
    I am trying to figure out the output of a block of C code that uses fork(), and I am having some trouble understanding why it comes out the way it does. I understand that fork() creates a second copy of the process running in parallel, and that it returns 0 in the child. Could someone explain the output of the code below step by step? Thank you.

        main() {
            int status, i;
            for (i = 0; i < 2; ++i) {
                printf("At the top of pass %d\n", i);
                if (fork() == 0) {
                    printf("this is a child, i=%d\n", i);
                } else {
                    wait(&status);
                    printf("This is a parent, i=%d\n", i);
                }
            }
        }
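
    For experimenting with the same control flow outside of C, here is a rough Python translation (POSIX-only, added as an illustration rather than part of the original question). The key point it shows is that the child does not exit after its branch; it falls through and may fork again on the next pass.

        # Mirror of the C program: each pass forks; the parent waits for its child
        # before printing, the child continues the loop and can fork again.
        import os

        for i in range(2):
            print(f"At the top of pass {i}", flush=True)
            if os.fork() == 0:              # fork() returns 0 in the child
                print(f"this is a child, i={i}", flush=True)
            else:
                os.wait()                   # parent blocks until the child exits
                print(f"This is a parent, i={i}", flush=True)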

    Read the article

  • Odd SVN Checkout failures occur frequently on VMware virtual machines

    - by snowballhg
    We've recently been experiencing seemingly random SVN checkout failures on our Hudson build system. Google search has failed me; I'm hoping the Super User community can help me out :-) We occasionally receive the following SVN error when our Hudson build jobs check out source via the Hudson Subversion plug-in (which uses SVNKit):

        ERROR: Failed to check out http://server/svnroot/trunk
        org.tmatesoft.svn.core.SVNException: svn: Processing REPORT request response failed:
        XML document structures must start and end within the same entity. (/svnroot/!svn/vcc/default)
        svn: REPORT request failed on '/svnroot/!svn/vcc/default'

    The issue seems to occur only when checking out from our virtual machines (Windows XP, Fedora 9, Fedora 12) using Hudson's SVN plug-in. Systems that use the traditional SVN client seem to work. SVN server version: 1.6.6. Hudson version: 1.377. Hudson SVN plug-in version: 1.17. Has anyone dealt with this issue, or have any suggestions? Thanks

    Read the article

  • Archiving Database Tables using Java

    - by HonorGod
    My application demands archiving database tables between Sybase and DB2 and vice versa, and within the same engine (DB2 to DB2 and Sybase to Sybase), using Java. I am trying to understand the best strategies in terms of performance, implementation, ease of use, and scalability. Here is my current process:

        - Source and destination tables, with the acceptable parameters (from Java), are defined in XML.
        - The application reads the source and destination configurations and executes them sequentially.
        - The destination is sometimes optional, for example when the source is just deleting data from a specific table or just calling a stored procedure.
        - The dataset between source and destination is extremely large (in the millions of rows).

    Off the top of my head, it looks like I could define dependencies between multiple source and destination combinations and have them execute in parallel in multiple threads. But will this improve performance (I hope it will)? Are there any open-source frameworks for data archiving using Java? Any other thoughts on the implementation side would be really helpful. Thanks

    Read the article

  • Why won't Windows use the other CPU cores?

    - by revloc02
    In Windows Task Manager, the Performance tab shows the first CPU maxed out and the other 7 just idling along with the occasional spike. What gives? More info: I've got 8 GB of RAM and only 4.5 GB is being used. The Processes tab has no indication of any process hogging processing power; in fact, System Idle Process is at 98-99. When I'm programming and have 8 to 12 applications going (several directly unrelated to programming, of course), my computer slows to a crawl. System info: Intel Core i7-2600K processor (quad-core with Hyper-Threading), 8 GB RAM, Intel BOXDZ68BC LGA 1155 motherboard, 500 GB HDD.

    Read the article

  • How fast are App Engine db.get(keys) and A.all(keys_only=True).filter('b =', b).fetch(1000)?

    - by Liron Shapira
    A db.get() of 50 keys seems to take me 5-6 seconds. Is that normal? What is the time a function of? I also did a A.all(keys_only=True).filter('b =', b).fetch(1000) where A.b is a ReferenceProperty. I did 50 such round trips to the datastore, with different values of b, and the total time was only 3-4 seconds. How is this possible? db.get() is done in parallel, with only one trip to the datastore, and I would think that looking up an entity by key is a faster operation than fetch.
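
    If it helps to narrow it down, a quick way to see where the time goes is to wall-clock both access patterns yourself. A minimal sketch with the old db API is below; the model definition is a stand-in for the A/b described above, and the key lists and loop counts are illustrative only.

        # Timing sketch for the two access patterns discussed in the question.
        import time
        from google.appengine.ext import db

        class A(db.Model):                # stand-in for the real model
            b = db.ReferenceProperty()

        def time_batch_get(keys):
            start = time.time()
            entities = db.get(keys)       # one batched round trip for all keys
            return time.time() - start, entities

        def time_keys_only_queries(b_keys):
            start = time.time()
            results = []
            for b in b_keys:              # one round trip per value of b
                results.append(A.all(keys_only=True).filter('b =', b).fetch(1000))
            return time.time() - start, results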

    Read the article

  • Java or Python distributed compute job (on a student budget)?

    - by midget_sadhu
    I have a large dataset (c. 40G) that I want to use for some NLP (largely embarrassingly parallel) over a couple of computers in the lab, to which I do not have root access and on which I have only 1G of user space. I experimented with Hadoop, but of course this was dead in the water: the data is stored on an external USB hard drive, and I can't load it onto the DFS because of the 1G user space cap. I have been looking into a couple of Python-based options (as I'd rather use NLTK than Java's LingPipe if I can help it), and the distributed compute options seem to be: IPython, Disco. After my Hadoop experience, I am trying to make sure I make an informed choice - any help on what might be more appropriate would be greatly appreciated. Amazon's EC2 etc. are not really an option, as I have next to no budget.
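
    For what it's worth, if the work really is embarrassingly parallel, one low-dependency baseline to compare the frameworks against is to run the same chunked pipeline on each lab machine with nothing but the standard library. This is an illustrative sketch only; the chunk size, file path, and the placeholder "NLP step" are assumptions, with NLTK standing in for the real work.

        # Baseline sketch: stream the corpus in chunks and fan them out to one
        # worker process per core on the local machine.
        from itertools import islice
        from multiprocessing import Pool

        def process_chunk(lines):
            # placeholder for the real NLP step (e.g. NLTK tokenisation/tagging)
            return sum(len(line.split()) for line in lines)

        def chunks(path, lines_per_chunk=10000):
            with open(path, encoding="utf-8", errors="ignore") as f:
                while True:
                    block = list(islice(f, lines_per_chunk))
                    if not block:
                        break
                    yield block

        if __name__ == "__main__":
            with Pool() as pool:
                totals = pool.imap_unordered(process_chunk, chunks("/mnt/usb/corpus.txt"))
                print(sum(totals))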

    Read the article

  • python unit testing os.remove fails file system

    - by hwjp
    I'm doing a bit of unit testing on a function which attempts to open a new file, but which should fail if the file already exists. When the function runs successfully the new file is created, so I want to delete it after every test run, but that doesn't seem to be working:

        class MyObject_Initialisation(unittest.TestCase):

            def setUp(self):
                if os.path.exists(TEMPORARY_FILE_NAME):
                    try:
                        os.remove(TEMPORARY_FILE_NAME)
                    except WindowsError:
                        #TODO: can't figure out how to fix this...
                        #time.sleep(3)
                        #self.setUp()  #this just loops forever
                        pass

            def tearDown(self):
                self.setUp()

    Any thoughts? The WindowsError thrown seems to suggest the file is in use... could it be that the tests are run in parallel threads? I've read elsewhere that it's 'bad practice' to use the filesystem in unit testing, but really? Surely there's a way around this that doesn't involve dummying the filesystem?
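
    A common way around this (an illustrative sketch, not a fix verified against the code above) is to give every test its own temporary directory via tempfile, so nothing is shared between runs or threads, and to make sure the code under test actually closes the handle it opens. A Python 3 version might look like this, with create_new_file standing in for the real function under test:

        # Isolate each test in its own temp directory so leftover or locked files
        # from other runs cannot interfere. create_new_file is a hypothetical
        # stand-in for the function being tested.
        import os
        import shutil
        import tempfile
        import unittest

        def create_new_file(path):
            with open(path, "x") as f:     # 'x' raises FileExistsError if present
                f.write("")

        class MyObjectInitialisation(unittest.TestCase):

            def setUp(self):
                self.workdir = tempfile.mkdtemp()
                self.temp_file = os.path.join(self.workdir, "output.dat")
                # addCleanup runs even when a test fails, replacing the tearDown dance
                self.addCleanup(shutil.rmtree, self.workdir, ignore_errors=True)

            def test_creates_file(self):
                create_new_file(self.temp_file)
                self.assertTrue(os.path.exists(self.temp_file))

            def test_fails_if_file_exists(self):
                create_new_file(self.temp_file)
                with self.assertRaises(FileExistsError):
                    create_new_file(self.temp_file)

        if __name__ == "__main__":
            unittest.main()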

    Read the article

  • Condor job using DAG with some jobs needing to run the same host

    - by gurney alex
    I have a computation task which is split into several individual program executions, with dependencies. I'm using Condor 7 as the task scheduler (with the Vanilla Universe, due to constraints on the programs beyond my reach, so no checkpointing is involved), so a DAG looks like a natural solution. However, some of the programs need to run on the same host, and I could not find a reference on how to do this in the Condor manuals. Example DAG file:

        JOB A A.condor
        JOB B B.condor
        JOB C C.condor
        JOB D D.condor
        PARENT A CHILD B C
        PARENT B C CHILD D

    I need to express that B and D must run on the same compute node, without breaking the parallel execution of B and C. Thanks for your help.

    Read the article

  • Outbound Traffic Logging on ASA 5520 possible?

    - by j2k4j
    Taking a look at the ASDM (6.4) for my ASA 5520, I get a nice summary of the traffic status, with items like "interface traffic usage" and "connections per second". This works well, but only shows the data for the last 5-6 minutes or so. Recently, I've been asked whether it's possible to pull up this same type of traffic data for a particular time in the past (such as: find the traffic usage for a 3-minute period on date xx:xx:xx @ time xx:xx:xx). I've noticed that my ASA 5520 is logging the warnings, errors, etc. that it is processing, but traffic data amounts are not logged (yet), according to my search through the ASA. Is logging the traffic data amounts, as wondered above, actually possible? Is there any way to find out the past data for traffic and similar values? Thanks!

    Read the article

  • "Cannot use fixed local inside lambda expression"

    - by JulianR
    I have an XNA 3.0 project that compiled just fine in VS2008, but that gives compile errors in VS2010 (with the XNA 4.0 CTP). The error: "Cannot use fixed local 'depthPtr' inside an anonymous method, lambda expression, or query expression". depthPtr is a fixed float* into an array that is used inside a Parallel.For lambda expression from System.Threading. As I said, this compiled and ran just fine in VS2008, but it does not in VS2010, even when targeting .NET 3.5. Has this changed in .NET 4.0, and even so, shouldn't it still compile when I choose .NET 3.5 as the target framework? Searching for the term "Cannot use fixed local" yields exactly one (useless) result in both Google and Bing. If this has changed, what is the reason? I can imagine capturing a fixed pointer type in a closure could get a bit weird; is that why? So I'm guessing this is bad practice? And before anyone asks: no, the use of pointers is not absolutely critical here. I would still like to know though :)

    Read the article

  • What factors could cause the scalability issue on a 10-core CPU?

    - by JackWM
    I am tuning the performance of parallel Java programs and want to check the impact of the architecture. I'm looking into a 10-core Intel CPU, the Intel(R) Xeon(R) CPU E7-L8867. I found that my program only scales up to 5 cores. What could be the causes? I'm considering architectural effects, e.g. memory contention. More specifically: are the 10 cores symmetric to each other? How many memory controllers does it have?

    Read the article

  • How to run a Rails 3 application on localhost:<my_port>?

    - by Misha Moroshko
    To run a Rails application on Windows I do:

        cd <app_dir>
        rails server

    I see the following:

        => Booting WEBrick
        => Rails 3.0.1 application starting in development on http://0.0.0.0:3000
        => Call with -d to detach
        => Ctrl-C to shutdown server
        [2011-01-12 20:32:07] INFO  WEBrick 1.3.1
        [2011-01-12 20:32:07] INFO  ruby 1.9.2 (2010-08-18) [i386-mingw32]
        [2011-01-12 20:32:07] INFO  WEBrick::HTTPServer#start: pid=5812 port=3000

    Question 1: Why is port 3000 selected? Where is it configured?

    Question 2: How could I run 2 applications in parallel? I guess I need to configure one of them to be on another port (like 3001). How should I do this?

    Read the article

  • How to block null/blank user-agents in IIS 7.5

    - by Jeremy
    We are going through a large-scale DDoS attack, but it isn't the typical botnet that our Cisco Guard can handle; it is a BitTorrent attack. This is new to me, so I am unsure how to stop it. Here are the stats: IIS is processing between 40 and 100 requests per second from BitTorrent clients. We have user-agent strings for about 20% of the requests, but the other 75% are blank. We want to block the blank user agents at the server level. What is the best approach?

    Read the article

  • ConfigurationErrorsException when serving images via UNC on IIS6

    - by Mark Richman
    I have a virtual directory in my web app which connects to a Samba share via UNC. I can browse the files via Windows Explorer without issue, but my web app throws a yellow screen with the following message:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: An error occurred loading a configuration file: Could not find file '\\cluster\cms\qa-images\120400\web.config'.

    What makes no sense to me is why it's looking for a web.config in that location. I know it's not an authentication issue, because the virtual directory can serve images from its root (i.e. \\cluster\cms\qa-images\test.jpg is served as http://myserver/upload/test.jpg just fine).

    Read the article

  • Pyserial: How to send data to drive a SIPO

    - by bino oetomo
    Dear all, I'm learning to drive a stepper motor with Python. It's hard these days to find a PC with a parallel port, so my plan is to use a USB-to-serial adapter and a SIPO (serial-in, parallel-out) shift register circuit. As you know, with this circuit we need to send the data bits in series, and they are stored in the register; then we need to send one more pulse to shift the data out to the output pins. How do I do this using pyserial? Sincerely, -bino-
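
    The normal TX path of a serial port sends framed UART bytes, so it doesn't map cleanly onto a shift register's data/clock pins; the usual workaround is to bit-bang the modem-control outputs instead. Below is a heavily hedged sketch: it assumes DTR is wired as the data pin, RTS as the shift clock, and that the latch/output pulse comes from whatever spare line the circuit provides (the break condition on TX is used here purely as an illustration). The port name, wiring, and timing are all assumptions to adapt.

        # Bit-banging sketch for a SIPO shift register over a USB-serial adapter
        # (pyserial 3.x property API). Wiring: DTR = data, RTS = clock, TX break
        # = latch pulse - all assumed, not taken from the original circuit.
        import time
        import serial

        ser = serial.Serial("/dev/ttyUSB0")         # port name is a placeholder

        def shift_out(byte, settle=0.001):
            """Clock one byte into the register, MSB first, then latch it."""
            for bit in range(7, -1, -1):
                ser.dtr = bool((byte >> bit) & 1)   # present the data bit on DTR
                time.sleep(settle)
                ser.rts = True                      # clock edge shifts the bit in
                time.sleep(settle)
                ser.rts = False
            ser.break_condition = True              # one extra pulse latches the
            time.sleep(settle)                      # register onto the outputs
            ser.break_condition = False

        # e.g. step through a simple full-step coil sequence
        for pattern in (0b1000, 0b0100, 0b0010, 0b0001):
            shift_out(pattern)
            time.sleep(0.01)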

    Read the article

  • SharePoint (2010) - Can't delete Service Application

    - by Chris W
    The search service application in our farm went bonkers, complaining that it couldn't connect to itself. After multiple people fiddled with it to try and fix it, we've ended up with two search applications: the new one, which is working perfectly, and the original one, which is very unhappy. I've tried deleting the original search app in Central Admin but it just won't go - it sits on the screen saying "Processing" but never completes, regardless of how long it is left. There's a lot happening in the logs, but I can't really decipher exactly why this isn't working. Things are working fine within the farm, but I'd ideally like to clean up this old application if possible. Are there any other options, like deleting it with stsadm? I've had a dig but can't seem to find the commands to enumerate the service applications and then delete the correct one.

    Read the article

  • Alternatives to y4mstabilizer; deshaking video

    - by Vi
    "Deshaking" means fixing the video captured from camera hold in hands. Is there open source video deshaker apart from y4mstabilizer from mjpegtools? Patch for mencoder is preferred. My current command line for processing video looks like: mplayer video_from_camera.avi -nosound -vo yuv4mpeg:file=/dev/stdout -really-quiet | y4mstabilizer -n -a 0.8 -r 30 -s 100 | mplayer -cache 1000 /dev/stdin -noconsolecontrols -vf crop=500:380:70:50,denoise3d=3:3:5:5 -vo yuv4mpeg:file=temporary.yuv y4mstabilizer is itself very unstable and often crashes (and it didn't work at all until I have patched memory allocation in it).

    Read the article

  • Moving cpanel backup of magento site to VPS

    - by user2564024
    I had my site on shared hosting and took a full cPanel backup; its top-level structure looks like this:

        addons  homedir  mysql  resellerpackages  suspendinfo  bandwidth  homedir_paths  mysql.sql  sds  userconfig  counters  httpfiles  mysql-timestamps  sds2  userdata  cp  locale  nobodyfiles  shadow  va  cron  logaholic  pds  shell  vad  digestshadow  logs  proftpdpasswd  ssl  version  dnszones  meta  psql  sslcerts  vf  domainkeys  mm  quota  ssldomain  fp  mma  resellerconfig  sslkeys  has_sslstorage  mms  resellerfeatures  suspended

    Now I have subscribed to a VPS. I copied the files inside homedir/public_html to /var/www/html on the new host, but I see the following error when I view the site in a browser:

        There has been an error processing your request
        Exception printing is disabled by default for security reasons.
        Error log record number: 259343920016

    I have just created a database named magenhto in MySQL. Previously I had cPanel and used a one-click installer, so I am not sure how to load that backup data into MySQL on the new system, or whether any other changes are needed.

    Read the article

  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than edit them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many file systems use COW at a block level (all updates are done via writes to new blocks), but this is not what I want.
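
    The hard-link approach described above is easy to script (the classic "cp -al" does the same thing); a minimal sketch is below, with placeholder paths, and with the stated caveat that it only behaves like copy-on-write if every tool replaces files instead of editing them in place.

        # Mirror src into dst: real directories, but every file is a hard link to
        # the original rather than a copy of its data. Paths are placeholders.
        import os

        def link_tree(src, dst):
            for root, dirs, files in os.walk(src):
                target_dir = os.path.join(dst, os.path.relpath(root, src))
                os.makedirs(target_dir, exist_ok=True)
                for name in files:
                    os.link(os.path.join(root, name), os.path.join(target_dir, name))

        link_tree("/srv/checkouts/clean", "/srv/checkouts/patch-under-test")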

    Read the article

  • What to use to wait on an indeterminate number of tasks?

    - by Scott Chamberlain
    I am still fairly new to parallel computing, so I am not too sure which tool to use for the job. I have a System.Threading.Tasks.Task that needs to wait for n tasks to finish before starting. The tricky part is that some of its dependencies may start after this task starts (you are guaranteed never to hit 0 dependent tasks until they are all done). Here is roughly what happens:

        1. The parent thread creates somewhere between 1 and (NUMBER_OF_CPU_CORES - 1) tasks.
        2. The parent thread creates a task to be run when all of the worker tasks are finished.
        3. The parent thread creates a monitoring thread.
        4. The monitoring thread may kill a worker task or spawn a new task depending on load.

    I can figure out everything up to step 4. How do I get the task from step 2 to wait to run until any new worker tasks created in step 4 finish?
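
    The underlying pattern here is a countdown latch whose count can still be raised while the monitor is adding work, with the "step 2" task waiting on it. The sketch below illustrates that pattern in Python with a condition variable; it is not the poster's TPL code, and the mapping back to .NET is left as an assumption.

        # Pattern sketch: a latch that can grow while work is being scheduled.
        # spawn_worker() raises the count before starting a worker; each worker
        # signals on completion; wait() returns only when the count reaches zero.
        import threading

        class GrowableLatch:
            def __init__(self):
                self._count = 0
                self._cond = threading.Condition()

            def add(self, n=1):
                with self._cond:
                    self._count += n

            def signal(self):
                with self._cond:
                    self._count -= 1
                    if self._count == 0:
                        self._cond.notify_all()

            def wait(self):
                with self._cond:
                    while self._count > 0:
                        self._cond.wait()

        latch = GrowableLatch()

        def worker(i):
            try:
                pass                      # real work goes here
            finally:
                latch.signal()

        def spawn_worker(i):              # called by the parent and by the monitor
            latch.add()
            threading.Thread(target=worker, args=(i,)).start()

        for i in range(3):
            spawn_worker(i)
        latch.wait()                      # the final task runs only after this returns
        print("all workers, including late-spawned ones, have finished")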

    Read the article
