Search Results

Search found 24117 results on 965 pages for 'write'.

Page 743/965 | < Previous Page | 739 740 741 742 743 744 745 746 747 748 749 750  | Next Page >

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5. All of them mount the directory for all the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance using ZFS. The problem is that every NFS client trying to write to a file over NFS gets stuck like this (seen inside an apache2 process):

      open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
      fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
      flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting forever. The effect? All the apache2 processes end up stuck waiting, and my servers can't process any other requests because there are no processes left. I don't know where to look for a solution; to me it's on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me for whatever would help :)
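
    Since NFSv3 advisory locks go through the separate lockd/statd services, one way to narrow this down (a hedged diagnostic sketch; the hostname is a placeholder and the file path is just the one from the trace) is to take Apache out of the picture and exercise the same lock with the flock(1) utility, then confirm the lock manager is actually registered on the Nexenta box:

      # on NFS client A: take an exclusive lock and hold it for a minute
      flock -x /nfs/website1/file.txt -c 'echo "client A holds the lock"; sleep 60'

      # on NFS client B: try the same lock with a 5-second timeout;
      # on a healthy setup this times out while A holds the lock and succeeds afterwards
      flock -x -w 5 /nfs/website1/file.txt -c 'echo "client B got the lock"'

      # on any client: check that the server advertises the NFS lock manager / status services
      rpcinfo -p nexenta-server | egrep 'nlockmgr|status'

    If the flock test hangs even with no other lock holder, the NLM/statd traffic between the clients and the Nexenta server is the place to look.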

    Read the article

  • Nginx Redirect when URL includes variable p=1

    - by ChrisD
    I need to write a small nginx rewrite rule (or rules) to alter/301-redirect some URLs within our existing website. For example:

      www.example.com.au/pageone.html?p=1
        to www.example.com.au/pageone.html
      www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price&p=1
        to www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price
      www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price&p=1
        to www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price

    As you can see, p=1 has been stripped from the URLs (it is superfluous, but has been live on the site and needs to be redirected now) - this applies to all http and https links. Basically if, and only if, p=1 is used anywhere within the URL, it should redirect to the same URL without the p=1. It should also let p=11, p=12, etc. through as normal (and not redirect), since they are not specifically p=1. If that is not possible, then I'd like to know how to redirect this kind of URL as a standalone one-off: www.example.com.au/pageone.html?p=1 to www.example.com.au/pageone.html I tried several redirects but none of them worked. To be honest I do not really know where to start with this - I am new to nginx.
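
    A minimal, untested sketch of one way this might be done (inside the relevant server {} block; it only handles p=1 as the sole or the last parameter, which covers the example URLs above, and p=11/p=12 will not match because of the anchors):

      if ($args ~ "^p=1$") {
          rewrite ^ $uri? permanent;    # p=1 was the only parameter: drop the query string
      }
      if ($args ~ "^(.+)&p=1$") {
          set $args $1;                 # strip the trailing &p=1
          rewrite ^ $uri permanent;     # nginx re-appends the modified $args automatically
      }

    The same two blocks would need to be present in both the http and the https server sections.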

    Read the article

  • Seagate 3TB hard drive loses format information

    - by Victor Bugarin
    I have Windows 7 x64 Ultimate, 6 GB of memory, and a 1 TB system HDD, plus a 3 TB Barracuda XT HDD. The 3 TB drive is installed in a StarTech 4-bay external enclosure. I had trouble with it, so I converted it to GPT, created one partition, and formatted it as NTFS. I can read from and write to the drive, but it becomes unreadable at some point, either while I am copying files or after I have copied files to it. I have copied large Blu-ray movies and assorted video files, I have also copied 32 GB of pictures, and I have copied about 86 thousand music files in different formats. At some point the partition becomes unreadable and I have to format it again (all files lost) and start the whole process over. At times I have also been unable to copy large ISO images (Blu-ray movies). I have partitioned the HDD into 2 partitions (P1 - 2 TB, P2 - 1 TB) and I have lost every single file on both partitions the same way. I reformat the HDD and it seems fine. I have run SeaTools to check the hard drive and it reports that it is OK. What gives?

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh: export MYVAR=myval I also have a PassEnv MYVAR line in a <virtualhost> section of an apache conf dir. That lets me do things like:

      $ echo $MYVAR
      myval
      $ python
      >>> import os; os.getenv('MYVAR')
      'myval'
      $ sudo echo $MYVAR
      myval
      $ sudo -i
      root# echo $MYVAR
      myval

    But then, despite that being the case, I get:

      root# /sbin/service httpd restart
      Stopping httpd:                      [ OK ]
      Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                           [ OK ]

    And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong? EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
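
    The pattern here is that /etc/profile.d/*.sh is only sourced by login shells, so all the interactive tests above see MYVAR, but the init script that `service httpd restart` runs does not, and Apache starts without it. Two hedged sketches of where the variable could live instead (the first assumes a RHEL-style layout, which /sbin/service and httpd suggest):

      # /etc/sysconfig/httpd -- sourced by the httpd init script before Apache starts
      export MYVAR=myval

      # or skip the OS environment entirely and set it in the vhost (mod_env),
      # which populates the same per-request environment that PassEnv feeds:
      SetEnv MYVAR myval

    With the first variant the existing PassEnv MYVAR line can stay exactly as it is.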

    Read the article

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the startup splash screen, and then it gives me this error: Launch4j: an error occurred while starting the application My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so: Cannot delete awt.dll: Access denied Make sure the disk is not full or write protected and that the file is not currently in use. The only way to delete the file is by restarting the computer, so something must still be running after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes) I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried: different Arduino IDE versions, updating Java, opening arduino.exe as Administrator. Nothing has worked. Anyone have any suggestions?
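
    Since the leftover javaw.exe processes are what keep awt.dll locked, a small hedged sketch of how to clear them without rebooting (from a command prompt; this force-kills every javaw.exe, so close any other Java applications first):

      taskkill /F /IM javaw.exe
      REM then retry deleting or replacing the Arduino files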

    Read the article

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air, both behind an IPCop firewall. The PowerMac is connected through a gigabit switch; the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box, and both machines are able to read and write files to it at close to their pipe speeds. The main problem is when I am trying to upload files to the internet at large. The G5 is only hitting 0.1 - 0.25 Mbps. The MacBook is able to hit 2-3 Mbps. The setup (G5 / IPCop / network) has been the same for 5 years. The issues with the internet speed started about 3 months ago; I hadn't tested on the MacBook at that point. I had complained to the ISP, and they said their modem needed a firmware update; we did that and nothing changed. Reset IPCop, turned off squid, etc. No changes. The ISP switched the office over to a better plan with a theoretical 6 Mbps up, still no change. At this point I tried testing the MacBook, and lo and behold, there's the speed. But why? I have tried changing out everything: cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there was something horribly wrong with the network, then internally I would find a similar issue, but that is perfect. iperf, ping, etc. show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated! (The original post included speedtest.net screenshots for the G5 and the MacBook Air.)
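
    One pattern that fits fine LAN throughput but a crippled upstream is a link-layer problem (duplex mismatch or interface errors) somewhere on the G5's wired path toward the IPCop box; a short hedged check from the G5 (en0 assumed to be its active interface):

      ifconfig en0 | grep media   # look for half-duplex or an unexpected 10/100 negotiation
      netstat -i                  # rising Ierrs/Oerrs or collisions point at the link layer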

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote:, nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems. EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is to copy my configured /root environment into the freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them. EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar: sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C / But I do need rsync to update from time to time. Any easy way to make it happen? EDIT 3: To formally formulate the question: it all began with the question of how to rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions: 1. The root account is locked on both systems, i.e. there are no root passwords; the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo). 2. I don't want completely passwordless sudo, but I don't want to be typing passwords all the time either. 3. The normal unprivileged user has entered their ssh passphrase into the ssh agent. Thanks
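
    A hedged sketch of the usual approach (the account name "me" and the paths are placeholders, and the sudoers entry is limited to rsync, so it is not completely passwordless sudo): allow only rsync to run through sudo without a password on the remote side, then have the local rsync invoke the remote one via sudo while still using your own ssh agent:

      # on the remote system, added with visudo:
      #   me ALL=(root) NOPASSWD: /usr/bin/rsync

      # on the local system -- the leading sudo lets rsync read /root locally,
      # and passing SSH_AUTH_SOCK through keeps your ssh agent usable under sudo:
      sudo env SSH_AUTH_SOCK="$SSH_AUTH_SOCK" \
          rsync -a -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/

    You still type your own sudo password locally now and then, but never a root or remote password.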

    Read the article

  • Deleting old system folders from a drive that is no longer the windows installation drive

    - by grenade
    I dropped my laptop and was no longer able to boot; there were error messages about a corrupt boot record. Replacing the hard drive and reinstalling Win 7 was how I dealt with it. The old drive still appears to be good, and I can read and write to it when I connect it as a second drive and mount it as D:. However, if I try to recover the space used by the Windows, ProgramData, Program Files & Program Files (x86) folders by deleting them, I get error messages about needing permission from TrustedInstaller. If I set myself as the owner of the folders and retry the delete, I get error messages about needing permission from myself! Since I'm pretty sure that I have permission from myself to delete the folders, I can only assume that the OS or file system has gotten its panties twisted. I have tried shift + right click + delete from Explorer, and if I run "del /f /s /q D:\Windows" from an admin command prompt I get a succession of Access is denied messages as well. How do I delete D:\Windows, D:\ProgramData, D:\Program Files & D:\Program Files(x86) from a drive that is not the Windows installation drive?
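
    A hedged sketch of the sequence that usually works for orphaned system folders: take ownership recursively, grant Administrators full control, then remove the tree (run from an elevated command prompt, and repeat for each of the four folders):

      takeown /F D:\Windows /R /D Y
      icacls D:\Windows /grant Administrators:F /T
      rd /s /q D:\Windows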

    Read the article

  • Changing the View of a Folder When It Is Opened from Finder

    - by user60044
    When I was running OS X 10.4 I had all my folders beautifully organized, with their positions set and with which ones should open in which view (list, icon, etc.). When I transitioned to 10.6 I found that the OS ignored all that information and introduced the awful behavior where changing the view for some folder propagates up to all parent folders which don't have their views locked. The only ways I can think of to get this functionality back are either to write a program that, given a root directory, enumerates it and sets the views based upon a static template I have, or to use Folder Actions. Aside from the fact that I don't know AppleScript, it seems folder actions cannot be inherited. Since I have thousands of folders involved here, all descending from a single root, I could not possibly add that action to each of those folders manually. What I would like to have is a folder action such that whenever I open a folder from Finder, if it contains any JPG, GIF, etc. type image files it automatically opens that folder in icon view, at a reasonable size to suit the number of images, and if the folder contains only folders, it opens in list view. Does anyone have anything that could do this for me? Thank You, -- Mark
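
    For the per-folder behavior described above, a hedged AppleScript sketch of what such a Folder Action might look like (attached through Folder Actions Setup; the extension list and the two view choices are assumptions, and as noted it would still have to be attached to each folder somehow):

      on opening folder this_folder
          tell application "Finder"
              set image_count to count of (files of this_folder whose name extension is in {"jpg", "jpeg", "gif", "png"})
              if image_count > 0 then
                  set current view of container window of this_folder to icon view
              else
                  set current view of container window of this_folder to list view
              end if
          end tell
      end opening folder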

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

      Server Software:        Apache/2.2.3
      Server Hostname:        127.0.0.1
      Server Port:            443
      SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
      Document Path:          /hello.html
      Document Length:        29 bytes
      Concurrency Level:      5
      Time taken for tests:   30.49425 seconds
      Complete requests:      411
      Failed requests:        0
      Write errors:           0
      Total transferred:      119601 bytes
      HTML transferred:       11919 bytes
      Requests per second:    13.68 [#/sec] (mean)
      Time per request:       365.565 [ms] (mean)
      Time per request:       73.113 [ms] (mean, across all concurrent requests)
      Transfer rate:          3.86 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:      190  347   74.3    333    716
      Processing:     0   14   24.0      1    166
      Waiting:        0   11   21.6      0    165
      Total:        191  361   80.8    345    716

      Percentage of the requests served within a certain time (ms)
        50%    345
        66%    377
        75%    408
        80%    421
        90%    468
        95%    521
        98%    578
        99%    596
       100%    716 (longest request)
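
    With KeepAlive off every request pays a full TLS handshake, and DHE-RSA-AES256 with 4096-bit parameters (visible in the ab header above) is about the most expensive combination mod_ssl will negotiate, so numbers this low are not implausible. A hedged sketch of 2.2-era settings that usually soften the cost (the cache path is a placeholder; resumed sessions skip the expensive key exchange, and non-DHE ciphers make the full handshakes themselves cheaper):

      # global SSL configuration (e.g. conf.d/ssl.conf)
      SSLSessionCache        shmcb:/var/cache/mod_ssl/scache(512000)
      SSLSessionCacheTimeout 300

      # prefer cheaper, non-DHE ciphers
      SSLCipherSuite         AES128-SHA:AES256-SHA:RC4-SHA
      SSLHonorCipherOrder    on

    Comparing the Connect times in ab before and after such changes should show whether the handshake really is where the ~350 ms is going.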

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

    Read the article

  • Windows 7 product key, which is the valid one - in registry or on a sticker?

    - by me how
    I am not too familiar with software licensing and how this all works, but I have a question regarding Windows 7 and partially Office - generally Microsoft products. I have been asked to assist our IT guy, who wants to collect all the product IDs for Windows 7 and Office. I haven't been given many details on how to go about it or how to collect them. After a bit of research I decided to use a freeware tool that pulls the software licenses out of the registry. I thought that was the easiest way and would provide the most accurate product IDs. I used Belarc Advisor to obtain all the information. It turns out that about 25 machines use the same product key. I have asked whether the company bought a volume/commercial license or something, but there isn't anyone available at the moment who could answer my question. I told the IT guy that there are 25 machines using the same product key and asked if that is alright. He told me to go around and write down the product keys from the sticker (label) on each machine. I am just not quite sure if that's the right approach, especially since the numbers do not match.... So, now I see that the numbers aren't matching, and my question is: in terms of software licensing, which is the VALID and correct product key to provide if ever questioned about a software license? Is it the number on the sticker or is it the number stored in the registry?

    Read the article

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to only run if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated, but other files haven't, and the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which when run will tell the load balancer to remove the box from the pool. Then puppet can do the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box. One a staging copy, and when puppet updates that, use a notify action to trigger the removal script, and then copy from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).
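
    A hedged Puppet sketch of the staging idea described above (paths, script names and resource titles are all hypothetical): the staging copy notifies a drain exec only when something actually changed, and the rest of the chain fires through refreshonly, so nothing runs on a no-op puppet run:

      file { '/var/www/staging':
        ensure  => directory,
        recurse => true,
        source  => 'puppet:///modules/website/files',
        notify  => Exec['drain-from-pool'],
      }

      exec { 'drain-from-pool':
        command     => '/usr/local/bin/remove_from_pool.sh',
        refreshonly => true,
        notify      => Exec['promote-to-live'],
      }

      exec { 'promote-to-live':
        command     => '/usr/bin/rsync -a --delete /var/www/staging/ /var/www/live/',
        refreshonly => true,
        notify      => Exec['return-to-pool'],
      }

      exec { 'return-to-pool':
        command     => '/usr/local/bin/add_to_pool.sh',
        refreshonly => true,
      }

    It is still tied to this one resource chain rather than being a generic "before any change" hook, which is the limitation the question is really about.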

    Read the article

  • Is there a way to do a sector level copy/clone from one hard drive to another?

    - by irrational John
    Without going into distracting details, I'm attempting to duplicate the contents of the 500GB drive in my MacBook to another 500GB drive. But this is turning out to be an unexpected hassle because the drive contains both the OS X partition and an NTFS partition with Win 7 via Apple's Boot Camp. With the exception of Clonezilla, the tools I have looked at so far all have some limitation. The Mac tools don't want to deal with the NTFS partition. The Windows tools are totally clueless about either the HFS+ partition and/or the hybrid MBR/GPT Boot Camp partitioning. Clonezilla looked like it would do what I want but apparently I can't figure out how to use it. After doing what I thought was a sector to sector copy I found that only the NTFS partition had been migrated. The others were apparently empty. (And frankly, I'm not positive Clonezilla migrated the partition table correctly either). Note: It takes over 2 hours using SATA to read/write all sectors with these drives. So I'm not up for using trial & error to narrow in on the right combination of Clonezilla options to use. I'm beginning to think that maybe the answer is to boot Linux (probably Ubuntu) and then use some ancient BSD command. Trouble is I don't know what command (or parameters to use) in order to do a sector level copy from one drive to another. As far as I know the drives have the same number of sectors so this should be trivial. Sigh.
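
    For the "ancient BSD command" route, dd is the usual answer; a hedged sketch from an Ubuntu live session (device names are examples only -- swapping if= and of= destroys the source, so identify the disks first, and the target must be at least as large as the source since the partition table is copied along with everything else):

      # identify which disk is which
      sudo fdisk -l

      # whole-disk, sector-for-sector copy, continuing past read errors
      sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync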

    Read the article

  • after building in more ram, bios/debian does nothing [closed]

    - by derty
    My private server has 2x1 GB of RAM, running 64-bit Debian on an Intel Q6600. It runs 2 virtual machines, each of which receives 512 MB of RAM, which as you can imagine is a bit tight for the whole system. Now I got 2x2 GB of RAM from a friend. I'm not sure if they are clocked at the same speed, but I'm sure my power supply is not at its limit and can handle it. There are 2 free RAM sockets left on the mainboard, so I shut the system down, installed the 4 GB, and watched what happened. After pressing the power button everything spins up as usual, BUT the screen shows nothing, not even the BIOS output it usually does. Why isn't the system booting? I can imagine there is no direct link between booting Debian and the BIOS not showing anything. Or is it GRUB? I mounted the system disk, and I can see it, change directories, and write files with "vim"; it does not seem like there is a problem with it.

    Read the article

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing. The problem I am having is that sometimes, NTbackup seems to just... blank out. It always completes backup of the C: and E: drives, but then it will start the system state and not add any more messages into the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log. That causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports that it is completing normally in the event log. Anyone ever seen a case where system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It's just not seeming to complete or log anything.

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache - sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan: all people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders, and webserver. Apache will belong to the group webserver.

    Directories - Permission: 771, Owner: user:uploaders. Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.

    Files within the web-root - Permission: 664, Owner: user:uploaders. Explanation: they will be uploaded and changed by different users, so both owner and group need to have w+r permissions. The webserver needs only to read files, so r permission only.

    Upload directories - Permission: 771, Owner: user:webserver. Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.

    Uploaded files - Permission: 664, Owner: user:webserver. Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make these files any more accessible than r access for the group.

    Not being an expert on file permissions, my question is whether or not this is the best possible policy for our situation? Any suggestions welcome.
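
    A hedged sketch of applying that policy to an existing tree (the path and the user "deploy" are placeholders; the find commands keep directories and files at their different modes):

      # web root: owned by an uploading user, group uploaders
      chown -R deploy:uploaders /var/www/site
      find /var/www/site -type d -exec chmod 771 {} +
      find /var/www/site -type f -exec chmod 664 {} +

      # upload directory: same modes, but group webserver so Apache can write
      chown -R deploy:webserver /var/www/site/uploads
      chmod 771 /var/www/site/uploads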

    Read the article

  • Stop Windows 7 from accessing or writing to hard drive unless "told" to by me? (More info inside...)

    - by Jeff
    A confusing question, perhaps, but bear with me. I have two internal HDDs set up in a RAID0 array which I use as mass storage. I access the drive very infrequently (once a day at most) and so I have set up Windows 7's power options to turn off idle disks after only 1 minute. This is fine, and the disks are turned off most of the time. However, I notice that Windows sometimes spins up the drives when I really, really don't want or need it to. This causes a 30 second delay as both drives spin up and lock up my system. Some examples of when this happens: 1) When I'm installing something using Windows Installer or Installshield; it seems to me as if they're using the drive with most available free space as the installer cache location... so my big RAID drive has to spin up! Most annoying. 2) Apparently, when I open a Java-based program which resides on my system drive and has nothing to do with my RAID drive! 3) At boot-up and shut-down time. At shutdown the drive spin up only for the computer to immediately shut down! Incredibly frustrating! I've already tried changing the letter of the drive, and at some points have removed the drive letter entirely, which solves the first two issues above. So my question (FINALLY!) is this: is there any way I can mark this drive as being for "storage only", so Windows basically does not see it at all until I actually invoke it somehow? Or is there any way I could set it up so that only specific programs have write access to it? For example, download managers, TeraCopy, etc. etc.? Basically I want it to be a "ghost drive" until I'm ready to use it and to stop Windows from spinning it up all the damn time! Thank you. :)

    Read the article

  • How expensive is a hostname in htaccess? Other solutions possible?

    - by Nanne
    For easily allowing or disallowing dynamic IP addresses, you can add them as a hostname in a .htaccess file. As I have read from: .htaccess allow from hostname? it does a reverse lookup on the connecting IP address, seeing if the response matches the allowed name. (Well, actually Apache is doing a double lookup, first a reverse lookup and then a forward lookup on the result of the reverse.) This is the reason we are currently not using dynamic-IP hostnames in the .htaccess: this "sounds" quite heavy: 2 extra lookups for every request. Is this indeed quite heavy, and would a reasonably busy server that would rather have less load than more get away with this :)? (E.g.: how does this 'load' compare to the rest? If a request is 1000 times more expensive than the lookups it might be negligible; on the other hand, it could be the final straw :) ) Are there other solutions? I can write a script that does a lookup of the hostname and puts it in .htaccess files of course, but this feels a bit like a hack.
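
    A hedged sketch of that scripted alternative (hostname and path are placeholders): resolve the dynamic hostname on a schedule and regenerate the .htaccess, so Apache only ever sees a literal IP and does no per-request lookups:

      #!/bin/sh
      # run from cron every few minutes
      IP=$(dig +short dyn-office.example.com | tail -n 1)
      if [ -n "$IP" ]; then
          printf 'Order deny,allow\nDeny from all\nAllow from %s\n' "$IP" \
              > /var/www/protected/.htaccess
      fi

    The trade-off is a window of up to one cron interval after the IP changes during which the old address is still allowed and the new one is not.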

    Read the article

  • ffmpeg - creating DNxHD MFX files with alphas

    - by Hugh
    I'm struggling with something in FFMpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command-line is: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf To which ffmpeg gives me the output: Input #0, image2, from '/tmp/temp.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mxf, to '/tmp/temp.mxf': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 [mxf @ 0x1005800]unsupported video frame rate Could not write header for output file #0 (incorrect codec parameters ?) There are a few things in here that concern me: The output stream is insisting on being yuv422p, which doesn't support alpha. 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing. I then tried the same thing, but writing to a quicktime (still DNxHD, though) with: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov This gives me the output: Input #0, image2, from '/tmp/1274263259.28098.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mov, to '/tmp/1274263259.28098.mov': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636% Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks

    Read the article

  • hi, beginner issue with socat [closed]

    - by ams
    The main question is how to begin: how do I send a request with some data (a request containing a number) to a server and then get data back from the server? And the second question is how I can solve this simple exercise (as the author calls it)? In this part, you should write a simple shell script which receives the URL of a website by sending your student number to a server and, after creating and sending an HTTP request for this URL, receives the desired content. Finally the content should be saved in an HTML file. Steps: 1. Connect to port 4000 of the server and send the message which includes your student number (e.g. 89207704) to the server. 2. Receive the URL in the form of http://www.example.com. 3. Create the HTTP request and send it to the website's server. 4. Receive the content of the URL from the website. 5. Save the content in the HTML file. What can I do? How do I begin? The topology the exercise is talking about was shown in a diagram attached to the original post. Is there any easy way to do this?
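
    A hedged sketch of those five steps with socat (the exercise server's hostname is unknown, so server.example.com is a placeholder, and the student number is the example one from the text):

      #!/bin/sh
      # steps 1-2: send the student number, read back the URL (e.g. http://www.example.com)
      URL=$(printf '89207704\n' | socat - TCP:server.example.com:4000)

      # steps 3-5: send a minimal HTTP request to that host and save the reply
      HOST=${URL#http://}
      printf 'GET / HTTP/1.0\r\nHost: %s\r\n\r\n' "$HOST" \
          | socat - TCP:"$HOST":80 > page.html

    The saved file still contains the HTTP response headers; stripping them off (and handling a URL that includes a path) is left as part of the exercise.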

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation: Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using the singleton. Compression only happens in the two cases mentioned above. The user/loader executable can only specify the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so all behave almost the same: each has a class and declares an object of the handler, which gets the instance of the singleton in libE.so and uses it for its own purposes. All these libraries are linked into the two executables. If only one of the two executables runs, everything is fine, but if both executables run one after the other, the file of the first-started executable gets compressed. Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives: glibc detected * (exe1 or exe2): double free or corruption (!prev): some_addr * Running the binaries under valgrind shows that the above error comes from the destructor of the singleton object. Thanks

    Read the article

  • How can I stop outlook 2003 from crashing?

    - by Xavierjazz
    Windows XP, Outlook 2003. Outlook keeps crashing, sometimes freezing my whole computer. The steps to reproduce: Have Outlook 2003 running (with the added "app" LOOKOUT for search, and both a POP mail account and an MS mail account set up). The program loads and displays my reminders. I minimize the reminders. Outlook displays my email list. I have the "Reading pane" set to display on the right. There is often junk in my junk folder. When I click on the MS mail junk folder, there is sometimes junk with a blank description. Clicking on this to select and delete it is when the program is virtually certain to crash. Often when I restart the program, the reading pane is again reset to the default, which is "no reading pane". If I change it back and then again click on the message, the program often crashes. If I don't set the reading pane but select the message(s), they can be selected and removed. I then set the reading pane and things are okay for a period. This has been going on for some time now. As part of trying to solve it, I did a deep scan with a number of rootkit/virus removers. One did find 2 related rootkit viruses and removed them. RAM seems okay, the HDD checks out okay. As I write this I realize that one thing I haven't tried is removing and reinstalling LOOKOUT. I will do that now. Any other ideas, or even better, solutions, would be most welcome.

    Read the article

  • How to auto syncronize files with network drive on Windows XP?

    - by stephenmm
    Windows XP: I would like to auto-synchronize files between a local drive and a network drive. I am aware of Windows Briefcase, but it is very slow and I have to tell it to synchronize. I really like the way Dropbox does its synchronization, as it is almost instantaneous. It is very impressive. I would just use Dropbox, but I cannot install it on the remote machine. Is there some tool or script I can create that will watch a particular folder for any changes and then sync those changes to the networked drive automatically and nearly instantaneously? CLARIFICATION: I would like this tool/script to be a daemon that starts when Windows starts and continually monitors a folder for any changes to its contents. Once it observes changes in the source or the destination, it synchronizes the files that changed (very similar to the way Dropbox works). I have a good idea about how I would do this in a Perl script, and if a tool does not exist that does this I will write it myself in Perl. If someone has already done this, can they share the script?
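
    Before writing a Perl watcher, one hedged option worth checking is robocopy (available for XP through the Windows Resource Kit), which can sit on a folder and re-mirror it whenever it sees changes; the paths below are placeholders, and note that it only syncs one way, local to network, so it covers the simpler case described first rather than the full two-way Dropbox behavior:

      robocopy C:\Work \\server\share\Work /MIR /MON:1 /MOT:1 /R:2 /W:5
      REM /MIR mirrors the tree, /MON:1 re-runs after 1 detected change,
      REM /MOT:1 waits at least 1 minute between passes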

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups based on a snapshot/incremental basis, that only capture changed data. I was surprised to see a regular period of time where the data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval. It is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s, for a period of 3-4hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following: the process is store.exe it is working on the mailbox database files while running, it is generating .log files (in the mailbox database folder) consistent with database changes the mailbox database is ~60GB in size, which fits with the total data changes on each iteration I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article
