Search Results

Search found 64711 results on 2589 pages for 'core data'.


  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disks to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID-1 implementation might be (or whether Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID-1 are rather different from RAID-0 or RAID-5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID-1. (For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID-1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and rely on the built-in checksum, meaning it can have up to 2x the number of reads in flight, as long as there's no data corruption.) Edit: My question about whether Windows 7 can do software mirroring has been answered: it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" is the better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally expose the poor performance of cheap motherboard RAID solutions.
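    As far as I know, Windows 7 Setup itself has no mirroring option, so the usual sequence is to install onto one disk and add the mirror afterwards from Disk Management or diskpart. A minimal sketch (volume letters are assumptions; repeat the last two steps for each of the three volumes):

        diskpart
        DISKPART> list disk           (identify the two new drives, e.g. 0 and 1)
        DISKPART> select disk 0
        DISKPART> convert dynamic     (both disks must be dynamic for mirroring)
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select volume c
        DISKPART> add disk=1          (mirrors the selected volume onto disk 1)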


  • Somewhat powerful server needed for computationally expensive stuff

    - by Dane Larsen
    So here's my problem. My dad runs a company that does some rather computationally expensive stuff. This is not supercomputer-level work, but the average job takes several hours to run on his Core i7 desktop. He asked me to look into a way to let his customers use the code on an hourly basis, namely via a server. Ideally he'd be able to buy a box for about $1000 and hook it right up to our home connection. Unfortunately, the data that needs to be both sent and received is on the order of several hundred megs, we live in a rural area, and the fastest connection offered is 1.5 Mbit/s download and about 0.3 Mbit/s upload. Not workable. What are the options for this kind of thing? Ideally, we'd have about 2GB of RAM, 300-500GB of storage, and a nice dual core, and it has to run some flavor of Linux. Any suggestions? Thanks in advance. EDIT: Also, ideally the price would be < $100 per month.


  • Installing Zenoss on Ubuntu raises a "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME correctly shows /usr/local/zenoss. To install Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst; when it tries to run the tests, the error below occurs.

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
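    A likely cause, assuming the stock 4.2.0 installer: ~/.bashrc is only sourced by interactive shells, so a variable exported there is invisible to non-interactive build steps, and sudo additionally resets the environment by default. A minimal sketch of passing ZENHOME through explicitly (paths are the question's own):

        # Put the variable where non-interactive shells see it too
        echo 'ZENHOME=/usr/local/zenoss' | sudo tee -a /etc/environment
        # Or pass it explicitly into the installer's environment for this run
        sudo -u zenoss env ZENHOME=/usr/local/zenoss ./install.sh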


  • Unable to write DVD-R (blank DVDs)

    - by FrozenKing
    I have a problem with my DVD drive: it can read CDs/DVDs and can write CDs and CD/DVD-RWs, but it cannot write DVDs. The drive model is a Samsung SH-S203B; I also have a log file created by Nero Burning ROM 11. The key fact is that no blank DVDs are recognized in the drive -- only previously written DVDs can be read. Is this an OS problem, should I try cleaning the drive, or is my DVD drive, which is 4 years old, simply failing, given these symptoms? OS = WinXP, AV = KIS 2012, DVD drive = Samsung SH-S203B (also tried the latest firmware and downgraded versions).

        IA32 Nero Version: 11.2.4.100 Internal Version: 11,2,4,100
        Recorder: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
        Adapter driver: <Serial ATA> HA 1
        Drive buffer : 2048kB
        Bus Type : via Inquiry data
        CD-ROM: <TSSTcorp CDDVDW SH-S203B> Version: SB04 - HA 1 TA 0 - 11.2.4.100
        Adapter driver: <Serial ATA> HA 1

        18:58:10 #37 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 00 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000

        18:58:10 #38 SectorVerify 20 File Cdrdrv.cpp, Line 12057
        Read errors from sector 0 to 14 <Padding>

        18:58:19 #39 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 10 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000

        18:58:19 #40 SectorVerify 21 File Cdrdrv.cpp, Line 12057
        Read error at sector 15 <Virtual Multisession Info>

        18:58:19 #41 SectorVerify 20 File Cdrdrv.cpp, Line 12057
        Read errors from sector 16 to 18 <Volume Structure Descriptor Sequence>

        18:58:28 #42 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 20 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000


  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:

        CORE SERVER - Server 2012 Datacentre (all below are Hyper-V guests)
        Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
        Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
        Server3: Cloud-SG01 (Remote Desktop Gateway)

        CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V guests)
        Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
        Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
        Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
        Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)

    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure, so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and authenticates, so each user is pushed onto the correct server (based on session collections in 2012). I won't lie: this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.
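    For the session-collection piece, a minimal sketch of per-tenant collections with the Server 2012 RemoteDesktop PowerShell module; the session host names follow the plan above, while Cloud-CB01 is an assumed RD Connection Broker:

        # One collection per tenant, each backed by that tenant's session host
        New-RDSessionCollection -CollectionName "CompanyA" `
            -SessionHost "Cloud-TS01.mycloud.local" `
            -ConnectionBroker "Cloud-CB01.mycloud.local"
        New-RDSessionCollection -CollectionName "CompanyB" `
            -SessionHost "Cloud-TS02.mycloud.local" `
            -ConnectionBroker "Cloud-CB01.mycloud.local"

    Access to each collection can then be restricted to the tenant's own security group (Set-RDSessionCollectionConfiguration -UserGroup), which pairs naturally with the one-OU-per-tenant layout.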


  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry-run mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    Why haven't the --stats changed, given that only the permissions and timestamps need to be updated, and the full files don't need to be copied again?
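    A note on the stats, assuming standard rsync semantics: "Number of files transferred" counts every file the run would update for any reason, including attribute-only changes, so it reads the same for both runs; the itemized flags are where the difference shows (<f+++++++++ is a new file, <f..tpo.... is times/permissions/owner only). Also, "Literal data: 0 bytes" in both outputs is an artifact of -n: a dry run never transfers content. A sketch of checking what a real run actually moves (host placeholder assumed):

        # Real run: "Literal data" now reports actual file content sent;
        # for attribute-only updates it stays at or near 0 bytes
        rsync -Pahv -e ssh /home/xxx/Videos/ user@nas:"/shares/Public/Shared\ Videos/" --stats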


  • Could I have damaged a CPU?

    - by Pascal
    Hey guys, this is a question I asked on the AnandTech forums, but I just discovered this community and think the question would fit right in. Here goes: I built a PC around a Q9550 a year and a half ago (the mobo is an Asus P5Q Deluxe). Specs give 2.83 GHz; I OC'ed it to 3.40 GHz without any problem (or so I thought) until 2 months ago. Cooling is provided by the stock Intel fan. Two months ago, I began to see random crashes, with the BIOS reporting a CPU overheat error; the PC would still reboot at the OC'ed speed without any problem. Since last Saturday and a few more crashes, the PC won't boot at 3.40 GHz, and even at stock speed (2.83 GHz) I get core temperatures of 60 C idle and 95 C under load on the first two cores. These are the four per-core temperatures I am talking about, not the T-CPU, which obviously is lower. The fan is running at a steady 2000 RPM. Questions for you:

    1. Is 2000 RPM the normal speed of the Intel fan, or is my fan somehow broken (which could explain the overheating)? In that case, any recommendation for a good fan for OCing?

    2. The hypothesis I fear is the right one: can the CPU have been slowly damaged over time by this OCing, meaning there is nothing much to do except wait for it to die? (As a side note, I am surprised that the Q9550 is still around $300 CDN here... I thought it would have been cheaper with all those i3/i5/i7s around.)

    Any help or advice would be more than welcome...


  • Debugging apache seg fault with gdb

    - by Joyce Babu
    Apache on a production server of mine is seg faulting intermittently. I have enabled the core dump option in the Apache configuration and have several dumped core files. Unfortunately, since it is a production server, Apache and the loaded modules are not compiled with debug symbols. From what I understand, gdb cannot do much without debug symbols. Can I at least find out which module is causing the seg fault, without debug symbols? If so, how? Following is the output from a gdb backtrace:

        (gdb) bt full
        #0  0xb7f1f832 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
        No symbol table info available.
        #1  0xb7be82bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
        No symbol table info available.
        #2  0xb771652a in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #3  0xb75df576 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #4  0xb7715c20 in ?? () from /usr/local/apache/modules/mod_pagespeed.so
        No symbol table info available.
        #5  0xb7be4a49 in start_thread () from /lib/libpthread.so.0
        No symbol table info available.
        #6  0xb7b2a63e in clone () from /lib/libc.so.6
        No symbol table info available.

    Does this mean that /lib/ld-linux.so.2 is causing the seg fault?
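    Even without debug symbols, the backtrace already answers part of this: frames #0/#1 are just the syscall entry point and a condition wait, while frames #2-#4 fall inside mod_pagespeed.so, which is where I'd look first. A sketch of confirming the address-to-module mapping from a core (binary and core paths assumed):

        gdb /usr/local/apache/bin/httpd /path/to/core
        (gdb) thread apply all bt     # a thread parked in pthread_cond_wait is often
                                      # idle; find the thread that took the signal
        (gdb) info sharedlibrary      # load-address ranges of every loaded module
        (gdb) x/i $pc                 # faulting instruction; match its address to a range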


  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with point-in-time recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"
        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
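    A note on what may be going on: for a local-to-local copy, a single rsync invocation splits itself into a client/generator, a sender, and a receiver, so three processes sharing the identical command line is rsync's normal process model rather than three runs of the script. A quick sketch to confirm they belong to one invocation:

        # The three rsyncs should appear as one parent/child family
        ps -ef --forest | grep -B1 -A2 '[r]sync'
        # Or inspect parent PIDs directly (PIDs from the question used as examples)
        ps -o pid,ppid,stat,cmd -p 9102,9103,9104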


  • guest crash on long backup via rsync

    - by ToreTrygg
    I recently upgraded the host to Ubuntu 9.10 with VMware Server 2.0.2; I have two guest machines. One is an SME Server, and I've had several crashes during backup sessions that use rsync to another PC; normal activities run fine. The other guest has been up without problems for 25 days. In the log I found a lot of rows like these:

        Dec 20 05:29:27.445: vcpu-1| VLANCE: Ethernet0 skipped 2560 time(s)
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 66 12 5 8 2 3 3 0 1 0 0 1 0 1 2 0
        Dec 20 05:29:27.445: vcpu-1| VLANCE: 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 2452
        Dec 20 05:29:27.651: vmx| ide0:0: Command WRITE(10) took 1.947 seconds (ok)
        Dec 20 05:29:37.945: vmx| ide0:0: Command WRITE(10) took 1.033 seconds (ok)

    When the virtual machine crashes, the log reports the following (I paste only some parts to limit the length of the message):

        Dec 27 01:48:05.686: Worker#2| Caught signal 6 -- tid 700
        Dec 27 01:48:05.686: Worker#2| SIGNAL: eip 0x460422 esp 0xb124c024 ebp 0xb124c03
        Dec 27 01:48:05.712: Worker#2| SymBacktrace12 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.719: Worker#2| Unexpected signal: 6.
        Dec 27 01:48:05.720: Worker#2| Core dump limit is 0 KB.
        Dec 27 01:48:05.762: Worker#2| Child process 10455 failed to dump core (status 0x6).
        Dec 27 01:48:05.762: Worker#2| SymBacktrace13 00000000 eip 0x39d7ee in function clone in object /lib/tls/i686/cmov/libc.so.6 loaded at 0x2d1000
        Dec 27 01:48:05.779: Worker#2| Msg_Post: Error
        Dec 27 01:48:05.780: Worker#2| http://msg.log.error.unrecoverable VMware Server unrecoverable error: (Worker#2)
        Dec 27 01:48:05.780: Worker#2| Unexpected signal: 6.

    I have no idea how to solve the problem with this installation; I'm thinking of downgrading the host to a version more compatible with VMware Server 2. I've read a lot of posts about installation difficulties, and I think the compilation problems during install could be related to my issue. Excuse me if the post isn't very clear -- it's my first post here. Any help or suggestions will be appreciated. Thanks


  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server with a high-traffic website. Recently we moved from a 2 x 4-core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4-core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server, and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazing fast. Today the server experienced a load average of 100++. Looking at top and sar, we noticed that system CPU time (%sys) is very high -- 50 to 70% -- while disk utilization was less than 1%. We tried to reboot, but the problem persisted after the reboot. At any moment the server had at least 3-4 GB of free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar:

        11:00:01  CPU  %user  %nice  %system  %iowait  %steal  %idle
        11:10:01  all  21.60   0.00    66.38     0.03    0.00  11.99

    We know that this is a traffic issue, but we do not know how to proceed further or where to look for a solution. Is there a way we can find out where exactly that 66.38% is being spent? Any suggestions would be appreciated.
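    A sketch of the usual CentOS 6 toolkit for breaking down kernel-side CPU time (installing the perf package is an assumption; the rest ships with sysstat/iproute):

        # Sample where the kernel spends time; tcp_* or spin_lock symbols at the
        # top point at network-stack pressure or lock contention
        perf top -g
        # Per-CPU breakdown over time, plus interrupt/softirq pressure
        mpstat -P ALL 2
        watch -d 'cat /proc/softirqs'
        # Given the dmesg SYN-flood hint: count half-open connections
        ss -ant state syn-recv | wc -l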


  • Apache ScriptAlias and access error?

    - by Parhs
    First of all, after much pain, I figured out how to make this work with Apache 2.4 on Windows. Here is my configuration, which seems to work: git clone, push, and everything else succeed. The problem: there is a "Require all denied" at the / directory, because I want only git functionality and nothing else. Example request from a git client:

        192.168.100.252 - - [07/Oct/2012:04:44:51 +0300] "GET /git/simple/info/refs?service=git-upload-pack HTTP/1.1" 200 264

    Error caused by that request:

        [Sun Oct 07 04:44:51.903334 2012] [authz_core:error] [pid 6988:tid 956] [client 192.168.100.252:13493] AH01630: client denied by server configuration: C:/git-server/web/simple

    There isn't any error on the git client -- everything works fine -- but I get this in the error log. Is there any solution so this error does not appear? I worry about log size.

        <VirtualHost *:80>
            DocumentRoot "C:\git-server\web"
            ServerName git.****censored****
            DirectoryIndex index.php
            SetEnv GIT_PROJECT_ROOT c:/git-server/repositories
            SetEnv GIT_HTTP_EXPORT_ALL
            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            ScriptAlias /git/ "C:/Program Files (x86)/Git/libexec/git-core/git-http-backend.exe/"
            <LocationMatch "^/.*/git-receive-pack$">
                Options +ExecCGI
                AuthType Basic
                AuthName intranet
                AuthUserFile "C:/git-server/config/users"
                Require valid-user
            </LocationMatch>
            <Directory />
                Options All
                Require all denied
            </Directory>
            <Directory "C:\Program Files (x86)\Git\libexec\git-core">
                Options +ExecCGI
                Options All
                Require all granted
            </Directory>
        </VirtualHost>
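    I can't tell from the config alone why authz_core also evaluates the physical path under DocumentRoot for a request the ScriptAlias already handles, but if the goal is just a quiet log while everything keeps working, Apache 2.4 supports per-module log levels. A sketch, assuming these denials are the only authz_core messages worth dropping:

        # Inside the VirtualHost: keep general logging at warn, but only log
        # authz_core (the module emitting AH01630) at crit and above
        LogLevel warn authz_core:crit

    The trade-off is that genuine authorization errors from that module are suppressed too, so this is a log-noise fix rather than a root-cause fix.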


  • Replacing all disks in a non-OS RAID 5 volume

    - by molecule
    Hi all, we currently have a server with 8 HDD slots. It is an HP DL380 G5 with a P400 controller. Two HDDs are in a RAID 1+0 config and host the OS; six HDDs are in a RAID 5 config and hold an Oracle DB. Basically, the RAID 5 volume is running out of space, and we would like to swap all six disks for higher-capacity ones. Excuse my ignorance, as I am pretty new to this... I believe we will need to back up the data, delete the RAID volume, insert the new disks, recreate the volume, and restore the data. Two questions:

    1. Do we need to worry about the OS partition, or is it completely independent, so we can simply take out the six disks, insert six new ones, and get the controller to recognize them and form a new RAID 5 volume? We should not need to reinstall the OS or Oracle, correct?

    2. Since we are going to restore the data on the volume from another source (our vendor will take care of this), we would like to keep the existing data on the six old disks in case we run into issues and want to fall back. Is this possible?

    Thanks in advance.
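    An untested sketch with HP's hpacucli tool, assuming the controller sits in slot 0 and the RAID 5 array shows up as logical drive 2; verify the IDs with show config first, since deleting the wrong logical drive destroys the OS mirror:

        # Inspect current arrays before touching anything
        hpacucli ctrl slot=0 show config
        # Delete only the RAID 5 logical drive (the OS mirror is a separate ld)
        hpacucli ctrl slot=0 ld 2 delete
        # After inserting the new disks, recreate RAID 5 from their bay IDs
        hpacucli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4,1I:1:5,1I:1:6,1I:1:7,1I:1:8 raid=5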


  • Permission issue for Apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2. I have mounted a 10GB EBS volume to an instance at /mnt/ebs1/. After mounting and formatting the volume, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi, with this content:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi
        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>
        Alias /static/ /mnt/ebs1/project/static/
        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me "Forbidden: You don't have permission to access / on this server." I found the user running Apache with ps aux: it is www-data, group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
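    One way to pin this down, sketched under the assumption that it's a directory-traversal problem: every directory from / down to the project needs execute (x) permission for www-data, and mount points created by root often lack it even when the files below are owned correctly. namei shows the whole chain at once:

        # Show owner/permissions of every path component in one shot
        namei -l /mnt/ebs1/project/apache/django.wsgi
        # If any component lacks x for "other", grant traversal on it
        sudo chmod o+x /mnt /mnt/ebs1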


  • Is this processor burned?

    - by Jhonnytunes
    I've recently exchanged the processors in two PCChips boards. Both boards are LGA775. Board A is a P17G (Pentium 4 with HyperThreading, 3GHz) and board B is a P49G (Pentium Dual-Core, 3GHz). I use board A to watch videos, some of which are 3GB in size, and this is why I exchanged the CPUs. I installed the Dual-Core in board A and it worked out of the box; now 3GB videos use 5% of the CPU instead of 50%. When I installed the Pentium 4 in board B, I forgot to connect the 4-pin power, and when I powered on the PC, the CPU fan stayed off. Then I connected everything correctly, but now the board doesn't show video. I think the CPU is not working, but I'm not sure. The PC turns on, the HD spins, the CPU fan spins, and the network socket blinks, but there is no video, and the case power LED isn't blinking either. I tried another PSU and everything was the same. I also noticed the CPU still has thermal paste on top. I don't really know what's happening, and I hope I don't have to buy another CPU. Is it burned?


  • Excel 2007: Named range problems when linking workbooks

    - by Mike
    I have 30+ workbooks, each with 5 specific worksheets (formatted the same). Each worksheet's data needs to be linked to a master workbook, so that I end up with 5 master workbooks and all the specific data in one long table format, $A$2:$I$750. (Are you still with me? ;)) I don't have access to a database, so I'm having to link the sheets to their master workbook directly. I've highlighted the data I need, named the range, and then tried referencing this from my master workbook. I get the #VALUE error when I try to link (=[WorkbookName]!MyNamedRange) to a cell that doesn't match the top-left cell of my range. Example: MyNamedRange is always =$A$2:$I$43 on one specific sheet. In my master workbook it works if it's referenced at A2, but I get #VALUE if it's referenced at A1 or A44. Any ideas? I'm trying to link my data into one continuous table so I can run a pivot on it, among other things. Can it be done like this, or should I just copy and paste? I'm trying to keep things linked so I don't need to spend time copying and pasting all day. Many thanks, Mike.
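    A likely explanation, assuming standard Excel 2007 behaviour: a plain =Name reference entered in a single cell is resolved by implicit intersection, so it only returns a value when the formula's own row intersects the named range ($A$2:$I$43 covers rows 2-43, hence A2 works while A1 and A44 fail). Two common workarounds, using the question's own names:

        ' Option 1: select a destination block the same shape as the range and
        ' confirm with Ctrl+Shift+Enter (array formula)
        =[WorkbookName.xlsx]!MyNamedRange

        ' Option 2: fetch cell-by-cell with INDEX, which works from any anchor;
        ' the offsets assume the first formula sits in row 2, column A
        =INDEX([WorkbookName.xlsx]!MyNamedRange, ROW()-1, COLUMN())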


  • How to set up NTFS ACLs with Access Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2K8 R2 infrastructure (AD, file server, print server, etc.). My question is about ACLs. While NetWare and Windows are totally different, I want to be sure my thinking is good before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA  <= shared as DATA with Access Based Enumeration
            |
            +-- Folder 1
            +-- Team 1's Folder
            +-- Team 2's Folder
            ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want: the Administrators group has full control top-down, and from DATA, ABE lists only the folders a user has access to (e.g., if I'm in group Team 2, I see Team 2's Folder). From what I understand, at DATA I remove all inherited NTFS ACLs (e.g., the Users group), making sure to keep the Administrators group and the SYSTEM user. After that, I grant Full Control (or whatever rights are needed) on each folder to the groups or users that need access. Am I wrong? Is there anything I should take care of? Any help with my understanding will be very appreciated. Regards.
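    A sketch of that ACL layout with icacls, assuming the share root is F:\DATA and the team groups are named Team1/Team2 (names are placeholders):

        :: Break inheritance at the share root, keeping no inherited entries
        icacls "F:\DATA" /inheritance:r
        :: Re-grant the baseline: admins and SYSTEM get full control down the tree
        icacls "F:\DATA" /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F"
        :: Per-team folders: each group can reach (and ABE shows) only its own folder
        icacls "F:\DATA\Team 1's Folder" /grant "Team1:(OI)(CI)M"
        icacls "F:\DATA\Team 2's Folder" /grant "Team2:(OI)(CI)M"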


  • Does btrfs balance also defragment files?

    - by pauldoo
    When I run btrfs filesystem balance, does this implicitly defragment files? I could imagine that balance simply reallocates each file extent separately, preserving the existing fragmentation. There is an FAQ entry, 'What does "balance" do?', which is unclear on this point:

        btrfs filesystem balance is an operation which simply takes all of the data and metadata on the filesystem, and re-writes it in a different place on the disks, passing it through the allocator algorithm on the way. It was originally designed for multi-device filesystems, to spread data more evenly across the devices (i.e. to "balance" their usage). This is particularly useful when adding new devices to a nearly-full filesystem. Due to the way that balance works, it also has some useful side-effects:

        - If there is a lot of allocated but unused data or metadata chunks, a balance may reclaim some of that allocated space. This is the main reason for running a balance on a single-device filesystem.

        - On a filesystem with damaged replication (e.g. a RAID-1 FS with a dead and removed disk), it will force the FS to rebuild the missing copy of the data on one of the currently active devices, restoring the RAID-1 capability of the filesystem.
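    For what it's worth, btrfs exposes defragmentation as an operation separate from balance; a sketch of invoking it explicitly (mount point assumed):

        # Recursively defragment files under a path
        btrfs filesystem defragment -r /mnt/data
        # Optionally recompress while defragmenting
        btrfs filesystem defragment -r -czlib /mnt/data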


  • HTTPS and Certification for dummies

    - by Poxy
    I had never used HTTPS on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go:

    1. To use HTTPS I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html).

    2. The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration?

    3. Applying HTTPS. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via HTTPS?

    4. Though all the data would be encrypted, browsers would still show ugly red warnings. This is just because they do not know anything about my certificate: they have about a hundred certificates pre-installed, and mine is obviously not one of them. The data IS still encrypted by HTTPS, though.

    5. If I want browsers to recognize my certificate, I would need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL, etc.).

    UPD: To read about SSL/TLS: "The First Few Milliseconds of an HTTPS Connection" -- I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
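    On point 2: Apache does not listen on 443 out of the box; you bind it with a Listen directive plus an SSL-enabled virtual host. A minimal sketch (paths and hostname are assumptions):

        Listen 443
        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key
        </VirtualHost>

    With that in place, any form posting to an https:// URL is encrypted in transit (point 3), whether or not the browser trusts the certificate (point 4).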


  • Google Chrome warning that Javascript is disabled

    - by kirchoffs415
    I hope somebody can help. I keep getting the following message when I log on: "Your Javascript is disabled. Limited functionality is available." It will stay for maybe a day, sometimes two. I have uninstalled JavaScript and reinstalled, but it's still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic.

    My system spec is as follows:

        OS Name: Microsoft Windows Vista Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys


  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user C, who isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green, I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups, by adding the passwd and group stanzas in /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data


  • Copy/paste filtered column in Excel - error message

    - by hazymat
    Firstly, I should state that I don't believe in spreadsheets; my normal mode of operation is that data should exist either in a database or in a text file... However, employment requires...

    In short, I have filtered a worksheet by column A, and I want to copy/paste from column B to column C. Obviously I don't wish to paste values into rows that were filtered out. The above sounds ridiculously simple, right? First I tried simply copy/pasting on the filtered worksheet. This appeared to select and copy only the filtered data; however, pasting inserted values into hidden/filtered rows, as you might expect. My initial research suggested I should select the filtered data and press Alt+; (that is, Alt plus semicolon), the shortcut for Go To Special > Visible cells only, and then just copy/paste. Ctrl+C correctly copies the filtered data, but when I go to paste the values into another column, it still pastes into hidden rows as well. Okay, so perhaps I should also "select visible" on the cells I wish to paste into? Nope: that gives me the error "That command cannot be used on multiple selections." What am I doing wrong?! A workaround is sketched below.
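    One workaround, assuming the aim is just to get column B's visible values into column C on the same rows: formulas entered with Ctrl+Enter into a visible-cells selection fill only the visible rows, and relative references keep them row-aligned.

        1. With the filter applied, select the target cells in column C
        2. Press Alt+; to reduce the selection to visible cells only
        3. Type  =B2  (the row of the active cell) and press Ctrl+Enter
        4. Optionally copy column C and use Paste Special > Values to freeze the results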


  • Tripwire help required

    - by ramaperumal
    I have created the policy file in Tripwire and also created the rules, as shown below:

        /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
        /usr/local/gtech/eseries/ -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M ;

    After running the integrity check, the output should cover a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file"), and M (MD5 hash value). I am getting the output below:

        [root@xxsi1242 tripwire]# tripwire --check
        Parsing policy file: /etc/tripwire/tw.pol
        *** Processing Unix File System ***
        Performing integrity check...
        Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

        Open Source Tripwire(R) 2.4.1 Integrity Check Report

        Report generated by:       root
        Report created on:         Wed 06 Nov 2013 05:38:12 AM EST
        Database last updated on:  Wed 06 Nov 2013 05:31:17 AM EST

        ===============================================================================
        Report Summary:
        ===============================================================================
        Host name:                 xxsi1242.gtk.gtech.com
        Host IP address:           156.24.65.171
        Host ID:                   None
        Policy file used:          /etc/tripwire/tw.pol
        Configuration file used:   /etc/tripwire/tw.cfg
        Database file used:        /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
        Command line used:         tripwire --check

        ===============================================================================
        Rule Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name                       Severity Level   Added   Removed   Modified
        ---------                       --------------   -----   -------   --------
        Invariant Directories           66               0       0         0
        Temporary directories           33               0       0         0
        * Tripwire Data Files           100              0       0         1
        Tech Stack                      100              0       0         0
        User binaries                   66               0       0         0
        Tripwire Binaries               100              0       0         0
        * CLPS bins                     100              0       0         2
        CLPS Configuration files        100              0       0         0
        ESCommon                        100              0       0         0
        Shell Binaries                  100              0       0         0
        OS executables and libraries    100              0       0         0
        Security Control                100              0       0         0
        ESCommon Configuration          100              0       0         0
        (/etc/gtech/escommon)

        Total objects scanned:  12358
        Total violations found: 3

        ===============================================================================
        Object Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
        Severity Level: 100
        Modified:
          "/etc/tripwire/tw.pol"

        Rule Name: CLPS bins (/opt/jboss/server)
        Severity Level: 100
        Modified:
          "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
          "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

        ===============================================================================
        Error Report:
        ===============================================================================
        No Errors

        *** End of report ***

    Note: in the output I am only getting the files which were modified. I need the detailed output, but unfortunately I am not getting what I expected. Please help me to proceed further.
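    One way to get the per-attribute detail, assuming stock Open Source Tripwire 2.4: tripwire --check saves a binary .twr report, and the verbosity is chosen when the report is printed, so the same report can be re-printed at a higher report level:

        # Report levels run 0 (single-line summary) to 4 (full attribute detail)
        twprint --print-report --report-level 4 \
            --twrfile /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr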


  • How to check DVD integrity at max read speed of DVD writer

    - by ashishsony
    I need to check the integrity of burned DVDs so that I can be sure about my backed-up data. I use DL DVDs for the backups. I used to use the VSO Inspector software for this, but since the day I switched to DL DVDs, VSO Inspector has reported errors during checking. I think the errors appear because switching the writing layer involves some dummy data somewhere. Secondly, it's terribly slow at checking. I believe a utility that reads all the files (not the disc surface) and reports any unreadable ones would do the job, but it should be quick! Nobody wants to sit through 3-4 hours of disc checking after a quick 30-minute burn! I am looking for such a utility on Windows or Linux; even scripts (Python, etc.) will do. I just want to be assured that the data is safe. Can someone help me with this? Thanks.
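    A sketch of the checksum approach on Linux, assuming the backup is burned from an ISO image and the burner is /dev/sr0: reading back exactly the image's sector count forces a full-speed read of every burned sector and verifies the data in one pass.

        # At burn time: record the image's checksum and size in 2048-byte sectors
        md5sum backup.iso > backup.iso.md5
        BLOCKS=$(( $(stat -c %s backup.iso) / 2048 ))
        # At verify time: read the same number of sectors back and compare hashes;
        # any unreadable sector surfaces as a dd I/O error
        dd if=/dev/sr0 bs=2048 count=$BLOCKS | md5sum

    If the disc was burned as files rather than from an image, a per-file pass such as md5sum -c against a manifest created before burning does the same job at file granularity.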


  • OC'ing a Sapphire 7950, problems with memory clock

    - by Cliff
    I bought a new rig with a Sapphire Radeon 7950 FleX with boost. Good scores initially in 3DMark 11 -- around 7300 stock. The problem is, as soon as I start overclocking, issues arise. The core clock is 860 MHz, but rises to 925 MHz with boost applied under load, so all the OC tools show 925 as the base clock for some reason. Overclocking the core clock has been no problem, though: I got up to 1200 MHz pretty stable, but saw only an incremental increase in 3DMark 11, to 7600 from 7300, which is worrying. The real trouble starts when I touch the memory clock. As soon as I raise it even by 1 point, to 1251 MHz, performance drops sharply: suddenly I score under 5000 graphics in 3DMark 11, no matter whether it's at 1251 MHz or 1500 MHz. I've tried adjusting every other parameter and different tools (Sapphire TriXX, Catalyst, Afterburner), still the same. I've tried upping the power, still the same. Where is the issue here?

