Search Results

Search found 15301 results on 613 pages for 'global assembly cache'.

  • Disable "Print and Close" Preview Shortcut on OS X Mountain Lion (10.8.2)

    - by Twitch_City
    I recently upgraded to Mountain Lion from Leopard and there is a global shortcut that I would really like to disable. Apparently, now if you select a document and press Command + p, Preview will open, print the document, then close. I'm sure this was done with the purest intentions but, unfortunately, it is right next to Command + o, which I use many times a day to open files. Since installing Mountain Lion, I've accidentally printed way too many documents just trying to open them! I've gone into the Keyboard preference pane and tried to find where this shortcut is located, but haven't had any success. Anyone know how I can disable this?
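
    One direction that is sometimes suggested for taming a menu shortcut like this (a hedged sketch, not from the article; the app and menu-item title below are assumptions): give the offending menu item a different key equivalent, either in System Preferences > Keyboard > Keyboard Shortcuts > Application Shortcuts, or from Terminal, e.g.:

        # remap Finder's "Print" menu item to Cmd-Opt-Ctrl-P so a plain Cmd-P no longer prints
        defaults write com.apple.finder NSUserKeyEquivalents -dict-add "Print" "@~^p"
        killall Finder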

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with ACLs like this:

        sudo mkdir -p "/home/net/share"
        sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share"

    My /etc/samba/smb.conf looks like this:

        [global]
        workgroup = w
        server string = server
        security = user
        load printers = no
        log file = /var/log/samba/%m.log
        max log size = 50
        dns proxy = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        encrypt passwords = true
        invalid users = nobody root
        follow symlinks = yes
        wide links = yes

        [share]
        comment = Writable by localuser and remoteuser
        path = /home/net/share
        valid users = remoteuser
        read only = no
        public = no
        printable = no

    Locally, localuser and remoteuser have user accounts and smbpasswds and can both read, create, and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser), I get "Permission denied" both when trying to create directories and files; oddly, though, it does create files (not directories!) despite these messages. How can I get this working?
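
    A hedged first check, not from the article: the ACL mask and the absence of default (inherited) ACLs are common culprits when an ACL-granted user cannot write through Samba. Something along these lines would confirm and address that:

        getfacl /home/net/share                      # inspect the effective mask and entries
        sudo setfacl -m "m:rwx" /home/net/share      # make sure the mask allows rwx
        # default ACLs so newly created files/dirs inherit the same grants
        sudo setfacl -d -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" /home/net/share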

    Read the article

  • Xen Disk Performance Issues

    - by user98651
    I'm currently using Xen PV on CentOS 5 with my domUs as flat files, running on a hardware RAID controller (write cache enabled) formatted with XFS. On the dom0 I can get about 500MB/s in a 2GB dd write from /dev/zero; however, on the domUs I'm lucky to get 10MB/s (it is usually around half that). I've tried changing the disk scheduler to NOOP on the domUs, changed some mount parameters, and tweaked the performance allocations of both the dom0 (prioritize CPU) and the domUs (increase RAM and VCPU allocations). None of these steps has produced any noticeable change in performance. My instinct is that it is not a hardware problem, given the solid performance of the dom0. Any ideas on what might be causing this? I'm considering moving to LVM-based domUs.
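
    A quick check worth making before blaming the storage stack (a sketch, not from the article; the file path and size are placeholders): rerun the dd test inside the domU with O_DIRECT so the dom0 and domU numbers are measured the same way, without the page cache flattering either of them.

        # run inside the domU
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct conv=fsync
        rm -f /tmp/ddtest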

    Read the article

  • ping6 fails on Linux

    - by michelemarcon
    I have two Linux boxes configured with IPv4 and have tried adding IPv6 to them. On box1 I issued:

        ip -6 addr add fd32:2d7f:f3c1::1/48 dev eth0

    and the interface now shows:

        inet6 addr: fd32:2d7f:f3c1::1/48 Scope:Global

    Then on box2 I issued:

        ip -6 addr add fd32:2d7f:f3c2::1/48 dev eth0

    Back on box1 (command/response):

        ping6 fd32:2d7f:f3c1::1  is alive!
        ping6 fd32:2d7f:f3c2::1
        ping6: sendto: Network is unreachable

    Why can't box1 ping box2 (and, of course, box2 can't ping box1 either)?
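
    One thing that stands out (an observation and a sketch, not from the article): the two addresses sit in different /48 ULA prefixes (fd32:2d7f:f3c1::/48 vs. fd32:2d7f:f3c2::/48), so neither box gets an on-link route to the other's prefix. Either put both hosts in the same prefix or add explicit routes, e.g.:

        # on box1
        ip -6 route add fd32:2d7f:f3c2::/48 dev eth0
        # on box2
        ip -6 route add fd32:2d7f:f3c1::/48 dev eth0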

    Read the article

  • Cached css/javascript files on Sun Java System Web Server

    - by Derp
    I'm doing front-end web development in a Solaris 10 / Sun Java System Web Server 7.0U2 environment. I have noticed that changes to static css or javascript files often do not take effect immediately, whereas changes to static html files always do. My best guess is that a default setting in the web server causes it to cache certain file types in order to provide reasonable performance out of the box. I don't have the admin server running--I'll need to edit the config files by hand. What change(s) can I make so that all of my css and javascript edits take effect immediately? Thanks!

    Read the article

  • I want to use the blur function instead of the mouseup function

    - by yossi
    I have a demo table where I can click on a cell (td tag) and edit its value, which goes straight to a PHP database. To do that, each cell contains two tags, a span and an input, like this:

        <td class='Name'>
            <span id="spanName1" class="text" style="display: inline;"> Somevalue </span>
            <input type="text" value="Somevalue" class="edittd" id="inputName1" style="display: none; ">
        </td>

    To manage the data inside the cell I use jQuery's .mouseup function. mouseup works, but it also causes trouble. I need to replace it with blur, but when I do, the code stops working: clicking the cell lets me enter the input tag and change the value, but I can't leave the field by clicking outside the table, which is what should update the database. You can see the demo with blur here. What do you advise? Here is the mouseup version:

        $(".edittd").mouseup(function() {
            return false;
        });
        //*************
        $(document).mouseup(function() {
            $('#span' + COLUME + ROW).show();
            $('#input' + COLUME + ROW).hide();
            VAL = $("#input" + COLUME + ROW).val();
            $("#span" + COLUME + ROW).html(VAL);
            if (STATUS != VAL) {
                //******ajax code
                //dataString = $.trim(this.value);
                $.ajax({
                    type: "POST",
                    dataType: 'html',
                    url: "./public/php/ajax.php",
                    data: 'COLUME=' + COLUME + '&ROW=' + ROW + '&VAL=' + VAL, //{"dataString": dataString}
                    cache: false,
                    success: function(data) {
                        $("#statuS").html(data);
                    }
                });
                //******end ajax
                $('#statuS').removeClass('statuSnoChange')
                            .addClass('statuSChange');
                $('#statuS').html('THERE IS CHANGE');
                $('#tables').load('TableEdit2.php');
            } else {
                //alert(DATASTRING+'status not true');
            }
        }); //End mouseup function

    I changed it to:

        $(document).ready(function() {
            var COLUMES, COLUME, VALUE, VAL, ROWS, ROW, STATUS, DATASTRING;
            $('td').click(function() {
                COLUME = $(this).attr('class');
            });
            //****************
            $('tr').click(function() {
                ROW = $(this).attr('id');
                $('#display_Colume_Raw').html(COLUME + ROW);
                $('#span' + COLUME + ROW).hide();
                $('#input' + COLUME + ROW).show();
                STATUS = $("#input" + COLUME + ROW).val();
            });
            //********************
            $(document).blur(function() {
                $('#span' + COLUME + ROW).show();
                $('#input' + COLUME + ROW).hide();
                VAL = $("#input" + COLUME + ROW).val();
                $("#span" + COLUME + ROW).html(VAL);
                if (STATUS != VAL) {
                    //******ajax code
                    //dataString = $.trim(this.value);
                    $.ajax({
                        type: "POST",
                        dataType: 'html',
                        url: "./public/php/ajax.php",
                        data: 'COLUME=' + COLUME + '&ROW=' + ROW + '&VAL=' + VAL, //{"dataString": dataString}
                        cache: false,
                        success: function(data) {
                            $("#statuS").html(data);
                        }
                    });
                    //******end ajax
                    $('#statuS').removeClass('statuSnoChange')
                                .addClass('statuSChange');
                    $('#statuS').html('THERE IS CHANGE');
                    $('#tables').load('TableEdit2.php');
                } else {
                    //alert(DATASTRING+'status not true');
                }
            }); //End blur handler (was mouseup)
            $('#save').click(function() {
                var input1, input2, input3, input4 = "";
                input1 = $('#input1').attr('value');
                input2 = $('#input2').attr('value');
                input3 = $('#input3').attr('value');
                input4 = $('#input4').attr('value');
                $.ajax({
                    type: "POST",
                    url: "./public/php/ajax.php",
                    data: "input1=" + input1 + "&input2=" + input2 + "&input3=" + input3 + "&input4=" + input4,
                    success: function(data) {
                        $("#statuS").html(data);
                        $('#tbl').hide(function() { $('div.success').fadeIn(); });
                        $('#tables').load('TableEdit2.php');
                    }
                });
            });
        });

    Thx

    Read the article

  • SharePoint Returning a 401.1 for a Specific User/Computer

    - by Joe Gennari
    We have a SharePoint Services 3.0 site set up supporting about 300 users right now. This report is isolated and has never been duplicated. We have one AD user who cannot log into the SharePoint site with his account from his machine; he is returned a 401.1 error. If any other user logs on with their account from his machine, it works fine. If he moves to another machine and logs on, it works fine. The only workaround so far has been to install Firefox on the machine; when he authenticates with Firefox, everything is okay. Remedies tried so far: cleared cookies/cache, turned Integrated Windows Authentication in IE off and on, downgraded IE 8 to IE 6, removed the site from the Intranet Sites zone, renamed the machine, and disjoined/rejoined the domain.

    Read the article

  • SmartOS HPC config suggestion

    - by Andrew B.
    I'm configuring a brand new HPC server and am interested in using SmartOS because of its virtualization control and ZFS features. Does this configuration make sense for a SmartOS HPC, or would you recommend an alternative?

    System specs: 2x 8-core Xeon, 384 GB RAM, 30 TB of HDs with 2x 512GB SSDs.

    Uses:
    - ZFS for serving data to different VMs and over the network; 1 SSD for L2ARC and 1 for ZIL
    - typically 1-2 Ubuntu instances running R and custom C/C++ code

    My biggest concerns as a newbie to SmartOS and ZFS are: (1) will I get near-metal performance from Ubuntu running on SmartOS if it is the only active VM? (2) how do I serve data from the global ZFS pool to the containers and other network devices?
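
    For question (2), a minimal sketch of the usual ZFS side of it (the pool name "zones" and the dataset name are assumptions, not from the article): create a dataset in the global zone and export it over NFS; datasets can also be delegated or lofs-mounted into individual zones.

        # in the SmartOS global zone; "zones" is the default pool name (assumption)
        zfs create zones/hpcdata
        zfs set sharenfs=on zones/hpcdata
        zfs set compression=lz4 zones/hpcdata   # optional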

    Read the article

  • nginx caching per user agent

    - by Tuinslak
    I'm currently using nginx as a reverse proxy with caching enabled. However, the main site has two different layouts, depending on the user agent (mobile or not). I've tried something similar to this:

        # mobile users
        if ($http_user_agent ~* '(iPhone|iPod|mobile|Android|2.0\ MMP|240x320|AvantGo|BlackBerry|Blazer|Cellphone|Danger|DoCoMo|Elaine/3.0|EudoraWeb|hiptop|IEMobile)') {
            set $iphone_request '1';
        }
        if ($iphone_request = '1') {
            proxy_cache mobile;
        }
        if ($iphone_request = '') {
            proxy_cache site;
        }
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_pass http://real-site.tld;

    However, nginx gives an error stating proxy_cache can't be used in an if block. Any other way to serve from a different cache depending on the browser? Thanks, Tuinslak

    Read the article

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but running incredibly slowly as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc to see if I can find any issues, the only strange thing is a bunch of these errors, occurring at about the same time as the downtime: [apc-warning] Unable to allocate memory for pool. in [file] on line 49. And a bit later on: [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746. Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?
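
    One direction sometimes suggested for the "Unable to allocate memory for pool" warning (a hedged sketch; the ini path and the 128M value are assumptions): APC's shared memory segment may be too small or fragmented, so check the current apc.shm_size, raise it, and restart Apache.

        php -i | grep -i 'apc.shm_size'
        # note: older APC builds expect a bare number of MB rather than "128M"
        echo 'apc.shm_size=128M' | sudo tee -a /etc/php5/conf.d/apc.ini
        sudo service apache2 restart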

    Read the article

  • Need solutions for sharing a 3Mb/768Kbps DSL line among 60+ users, and faster bandwidth

    - by elistp
    Two parts. Part 1: We currently have 2 DSL Lines with 3Mb/768Kbps speeds load balanced for 60+ users. Accessing the Internet is borderline unusable. The simple solution would be to get a faster DSL Line but the highest DSL package is 6Mb/768Kbps, has quite the price jump, and doesn't do anything to help with upload speeds. I'm looking for free or extremely low cost solutions (web cache, traffic shaping, bandwidth controls, etc) to help with making Internet access more bearable until the next funding year. Can anyone give any advice? Part 2: We're looking into a 4.5Mb bonded T1 in the next funding year which is of course significantly more expensive than 2 DSL lines. Are bonded T1s our only hope for faster speeds? Are there any better alternatives?
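
    On the web-cache idea from Part 1, a minimal sketch (assumptions, not from the article: a spare Linux box on the LAN and Squid's stock defaults; the package/service may be named squid3 on some releases): install Squid, give it a disk cache, and point browsers at it, which at least keeps repeat downloads of the same content off the DSL lines.

        sudo apt-get install squid
        # in /etc/squid/squid.conf: a ~10 GB disk cache, default proxy port 3128
        #   cache_dir ufs /var/spool/squid 10000 16 256
        #   http_port 3128
        sudo service squid restart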

    Read the article

  • Access Configuration Page of Modem (in bridge mode) through router

    - by Ujjwal Singh
    Given the network configuration:

        Internet (121.243.x.y/27)
          |
        (121.243.x.z) Static : Public Global IP
        Modem, Bridge Mode | WiMAX
        (192.168.1.1/24) + 169.254.1.1/24 : Modem Configuration Page
          |
        (192.168.1.2)
        Router, DLink DIR 615 | Ethernet + WiFi
        (192.168.0.1/24)
          | Local network
        (192.168.0.2)
        Workstation, Ethernet | no WiFi

    Is there any way, maybe using routing tables, to access the modem configuration page at 169.254.1.1 from my local network, using a Windows 7 PC? Note that the modem is currently able to display its configuration page at 169.254.1.1, i.e. even while it is in bridge mode.

    Read the article

  • Apache LocationMatch throws 500 and AddOutputFilterByType does nothing

    - by tackleberry
    I need to add the directives below to Apache, but I get a 500 when I add these lines:

        <LocationMatch "^/assets/.*$">
            Header unset ETag
            FileETag None
            # RFC says only cache for 1 year
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
        </LocationMatch>

    Additionally, the response is not gzipped when I add:

        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript

    Apache version: Server version: Apache/2.2.22 (Unix). App: a Rails 3.2 app. When I checked the request and response for the gzip problem, I saw that the browser requested gzip (Accept-Encoding: gzip, deflate) but the response is not gzipped.
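
    A hedged guess at both symptoms, not from the article: Header, ExpiresActive/ExpiresDefault, and AddOutputFilterByType DEFLATE need mod_headers, mod_expires, and mod_deflate respectively, and an unknown directive in an .htaccess context is exactly what produces a 500. Something like this would confirm (a2enmod is Debian/Ubuntu specific; on other layouts it's a LoadModule line in httpd.conf):

        apachectl -M | grep -Ei 'headers|expires|deflate'
        sudo a2enmod headers expires deflate    # Debian/Ubuntu only (assumption)
        sudo apachectl configtest && sudo apachectl graceful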

    Read the article

  • Hide directory contents from showing when accessing the URL directly

    - by SoLoGHoST
    On my site, if you browse to http://example.com/images/ the contents of the entire directory are listed (screenshot omitted). How can I make it so that this doesn't show up when people browse directly to http://example.com/images/? Can I create an .htaccess file in that directory, or is there a better way? I really don't want people being able to do this anywhere on the site (i.e. every directory on that site). What can I do to prevent this? I figure it's either something that has to be done in Apache or perhaps a global .htaccess file placed in the public_html folder? EDIT: I worked around this with an index.php file, but I still feel that security is an issue here; how can I fix this permanently?
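
    A minimal sketch of the usual Apache-side fix (not from the article; the public_html path is an assumption): turn off automatic directory listings, either per directory or once at the document root so it covers every directory beneath it.

        # one line in .htaccess at the site root (or in the vhost config);
        # requires AllowOverride Options (or All) for the directory
        echo "Options -Indexes" >> ~/public_html/.htaccess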

    Read the article

  • Samba / smbd on CentOS 6.5

    - by Satalink
    I've installed Samba4 and have the smb.conf file as follows:

        [global]
        workgroup = WORKGROUP
        server string = Samba Server
        realm = REXIALO.COM
        netbios name = REXIALO.COM
        security = user
        map to guest = Bad Password
        bind interfaces only = no
        interfaces = lo venet0
        log file = /var/log/samba/samba.log
        max log size = 1000

        [webroot]
        path = /usr/local/apache/htdocs
        comment = Example.com webroot directory
        read only = No

    I can connect from the same server with smbclient. Localhost:

        # smbclient -L localhost -U root
        Enter root's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename    Type    Comment
            ---------    ----    -------
            webroot      Disk    RexiAlo webroot directory
            IPC$         IPC     IPC Service (RexiAlo Samba Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Server       Comment
            ---------    -------

            Workgroup    Master
            ---------    -------

    network:

        # smbclient -L rexialo.com -U
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Sharename    Type    Comment
            ---------    ----    -------
            webroot      Disk    RexiAlo webroot directory
            IPC$         IPC     IPC Service (RexiAlo Samba Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.11]

            Server       Comment
            ---------    -------

            Workgroup    Master
            ---------    -------

    The problem is that when I try to map to the smb webroot from Windows 7, it asks for a user/password but just times out and then prompts for credentials again. The samba.log file does not show any activity other than the startup of the smbd process. Any help would be appreciated.
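
    A hedged first thing to check, not from the article: a stock CentOS 6 iptables policy blocks the SMB ports, which matches the symptom of smbclient working locally while remote clients time out. Roughly:

        sudo iptables -L -n | grep -E '139|445'     # is SMB traffic already allowed?
        sudo iptables -I INPUT -p tcp -m multiport --dports 139,445 -j ACCEPT
        sudo service iptables save                  # persist across reboots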

    Read the article

  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iPhone app for several weeks. Now I've hit an animation problem that I can't figure out how to resolve. Maybe you can help. Here are the details (a little long, bear with me).

    Basically, the effect I want is: when the user taps a button, a loading view pops up, hiding the whole screen; the app then does a lot of heavy computation, which takes a few seconds. Once the computation is done, some result views (something like checkers on a checker board) are rendered under the loading view. Once all result views are rendered, I use an animation to remove the loading view and show the result views to the user.

    Here is what I do. When the user taps the button, I run this code:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)];
        // use a really high index number so it will always be on top
        [self.view insertSubview:loadingViewController.view atIndex:1000];
        [UIView commitAnimations];

    In the "loadingViewInserted" callback, another function does the heavy computation. Once the computation is done, a lot of result views (like checkers on a checker board) are rendered under the loading view:

        for (int colIndex = 1; colIndex <= result.columns; colIndex++) {
            for (int rowIndex = 1; rowIndex <= result.rows; rowIndex++) {
                ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]];
                [self.view addSubview:rv];
            }
        }

    Once all result views are added, the following animation is invoked to remove the loading view:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
        [loadingViewController.view removeFromSuperview];
        [UIView commitAnimations];

    Most of the time (maybe 90%) this does exactly what I want. Sometimes, however, I see a weird result: the loading view shows up as expected, then, before it disappears, some result views that are supposed to be under the loading view suddenly appear on top of it, some of them only partially rendered. Then the loading view curls up and everything looks normal again. The glitch lasts less than a second, but that is already enough to spoil the UI. I have tried all kinds of things to fix it (removing the loading view from another thread, making the loading view non-transparent), but none of them work. The only thing that helps a little is hiding all the result views first and unhiding them in the callback after the last animation finishes, but that loses the nice effect of the results already being there as the loading view curls up. At this point I really think this is a bug in the iPhone OS (I compile against OS 3.0). Or maybe you can point out what I have done wrong (or could do differently). (Thanks for reading this long post. :-) )

    Read the article

  • Timeout ssh sessions after inactivity?

    - by Insyte
    PCI requirement 8.5.15 states: "If a session has been idle for more than 15 minutes, require the user to re-enter the password to re-activate the terminal." The first, and most obvious, way to deal with ssh sessions that are idling at the bash prompt is by enforcing a read-only, global $TMOUT of 900. Unfortunately, that only covers sessions sitting at the bash prompt. The spirit of the PCI spec would also require killing sessions running top/vim/etc. I've considered writing a */1 cron job that parses the output of "/usr/bin/w" and kills the associated shell, but that seems like a blunt instrument. Any ideas for something that would actually do what the spec requires and just lock the terminal? I've looked at away and vlock; they both seem great for voluntarily locking your terminal, but I need a cron/daemon task that will enforce locking.
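
    For reference, a minimal sketch of the $TMOUT piece mentioned above (the profile.d path is an assumption); as noted, this only covers idle shells sitting at the prompt, not sessions running top/vim/etc.:

        # /etc/profile.d/tmout.sh
        TMOUT=900
        readonly TMOUT
        export TMOUT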

    Read the article

  • How to get the spec of a machine on Linux?

    - by machinePurchaser
    I am interested in getting the spec of a machine, because I am thinking of getting a similar server. What I mostly want to know is the number of cores / CPUs / etc., the amount of memory, the speed of the CPUs, the CPU cache size, and any other detail that is important for performance. My question is two-fold: Which parameters should I be interested in other than the ones listed above? Is there an easy way to read them off the machine in Linux? cat /proc/cpuinfo reveals a lot about the CPUs, for example... What about memory (I would rather not rely on top), etc.?
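
    A few standard read-only probes that cover those parameters (a sketch, not from the article; exact flags vary slightly between distributions):

        lscpu                        # sockets, cores, threads, clock speed, cache sizes
        cat /proc/cpuinfo            # per-CPU details
        free -m                      # installed and used memory
        sudo dmidecode -t memory     # DIMM sizes, speeds, and free slots
        sudo dmidecode -t processor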

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
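
    Two small things that are sometimes suggested for marathon e2fsck runs (hedged, not from the article): run it with a progress indicator so there is at least an estimate of how far along the current pass is, and let it spill its internal tables to disk via /etc/e2fsck.conf so a huge filesystem doesn't exhaust memory:

        e2fsck -C 0 -f -y -v /dev/sdb1         # -C 0 prints a completion/progress bar

        # /etc/e2fsck.conf
        [scratch_files]
        directory = /var/cache/e2fsck          # directory must already exist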

    Read the article

  • Minimal backup for Windows 7 system recovery

    - by JIm
    There might not be an answer to this, but for a home Win7 system, what files/directories must be backed up to recover after a Windows crash? I can reinstall software, and I keep data files elsewhere. When I use Acronis home backup software to back up my "critical" files, it seems to choose the entire partition. Updates are mostly browser cache files and the like. Or, after a crash, should I just reinstall Windows? I dread the hours of Windows updates that would require. Thanks.

    Read the article

  • When Exchange receives a message, send notification to another address

    - by Rick Kierner
    I have an internal Exchange 2010 server that receives no outside email and sends no email outside. I'd like to send a notification email to an outside email when a user receives a message on the internal Exchange server. This message would simply say "A Message has been sent to your XYZ Email, go check it" Theoretically, the AD would have an external email address associated with the user's AD account. I'm hoping that a process could be triggered when an email is received on this Exchange server and I could take some type of action to look up the AD account for the recipient, grab the external email address and send a standard email to that user. This would be a global rule for all Exchange accounts. The problem is that I don't know where to start. Thanks!

    Read the article

  • How to tell if any MySQL connections have been dropped or timed out?

    - by Continuation
    A client is using PHP to connect to MySQL. The PHP scripts and the MySQL database are located on two different Linux servers. He complained that database connections were being dropped or timed out and asked me to take a look. Is there any place in MySQL that can show me what and how many connections have been dropped or timed out? I looked into the slow query log and didn't see anything. Any suggestions on how to diagnose this dropped/timed-out database connection problem? Thanks.

    EDIT: The slow query log is enabled in my.cnf:

        log-slow-queries=/var/log/mysql-slow-queries.log

    and when I do a show global status; I get:

        | Slow_queries | 11402347 |

    So there are a lot of slow queries. But the file /var/log/mysql-slow-queries.log doesn't exist. Why is that?
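
    A hedged place to look, not from the article: MySQL keeps counters for exactly this, and the wait_timeout / interactive_timeout variables control when idle connections get dropped. For example:

        mysql -e "SHOW GLOBAL STATUS LIKE 'Aborted%'"       # Aborted_clients / Aborted_connects
        mysql -e "SHOW GLOBAL VARIABLES LIKE '%timeout%'"   # wait_timeout, interactive_timeout, ...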

    Read the article

  • Iozone: sensible settings for a server with lots of RAM

    - by Frank Brenner
    I have just acquired a server with 2x quad-core Xeons, 48 GB of ECC RAM, and 5x 160GB SSDs on an LSI 9260-8i. Before deploying the target platform, I'd like to collect as much benchmark data as possible, testing I/O with hardware RAID in various configurations, ZFS zRAID, as well as I/O performance on vSphere and with KVM virtualization. In order to see real disk I/O performance without cache effects, I tried running Iozone with a maximum file size of more than twice the physical RAM, as recommended in the documentation, so:

        iozone -a -g100G

    However, as one might expect, this takes far too long to be practicable. (I stopped the run after seven hours.) I'd like to reduce the range of record and file sizes to values that might reflect realistic performance for an application server, hopefully getting the run times down to under an hour or so. Any ideas? Thanks.
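
    One way runs are often shortened without letting the page cache flatter the numbers (a sketch, not from the article; the sizes and test-file path are assumptions): use O_DIRECT instead of a file larger than RAM, and constrain auto mode to a realistic range of file and record sizes.

        # -I = O_DIRECT; -n/-g = min/max file size; -y/-q = min/max record size
        iozone -a -I -n 1g -g 4g -y 64k -q 1m -f /data/iozone.tmp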

    Read the article

  • Wrong version of PHP used by IIS

    - by user1070125
    Long ago I installed PHP 5.2 on IIS using an installer. Recently I installed PHP Manager and upgraded to PHP 5.3 simply by unzipping the downloaded file and then using "Register new PHP version" in PHP Manager. I did this at the global scope for all sites, and it works OK. Once in a blue moon, though, errors are written to the PHP 5.2 error log even though that version is not supposed to be used anymore. The errors are typically about 5.3-specific functions that are not supported in 5.2. How can that be? And can I safely uninstall PHP 5.2 using its uninstaller (I think it added some entries to the environment paths)?
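
    A hedged way to see which PHP some requests are still reaching (not from the article): list the IIS handler mappings and FastCGI definitions, since a site- or directory-level handler can still point at the old php-cgi.exe even after the global mapping was changed.

        %windir%\system32\inetsrv\appcmd list config /section:system.webServer/handlers | findstr /i php
        %windir%\system32\inetsrv\appcmd list config /section:system.webServer/fastCgi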

    Read the article
