Search Results

Search found 64711 results on 2589 pages for 'core data'.


  • Using an i7 "gamer" CPU in an HPC cluster

    - by user1219721
    I'm running the WRF weather model. That's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF's performance doesn't depend on core count so much as on memory bandwidth; that's why a Core i7 3820 or 3930K performs better than high-grade Xeons like the E5-2600 or E7. Universities seem to use the Xeon E5-2670 for WRF; it costs about $1500. The SPEC2006 fp_rate WRF benchmark shows the $580 i7 3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle RAM up to 2400 MHz, which gives a large performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but still costs less than 20€ a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still 700€ per CPU cheaper than the Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I do it properly with Xeons? (This is not a critical application. I can handle downtime, and I think I don't need ECC?)


  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for a university's academic needs. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at peak times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also limited to an ordinary 4-core machine (e.g. an Ivy Bridge Intel Core i7-3770) with about 16 GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). Within these limits, I have to choose the software (web server, database, etc.) appropriate to the goal. I have decided to build the site in PHP. Please help me by answering the following questions based on your expertise; my prime candidates after googling are in parentheses, and a configuration sketch follows below.
    1. Which web server is fastest, most stable, and most secure when implemented and optimized for PHP, and why? (nginx)
    2. Which PHP accelerator is fastest, most stable, and compatible with the selected web server, and why? (APC with Zend Optimizer+)
    3. Which database is fastest, most stable, and most secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
    4. Are there any errors in my assumptions so far? If so, please enlighten me.
    5. Is there anything else I need to know in order to achieve this goal?
    I understand that performance also depends on how the source code is written, so assume the site will be built as efficiently as possible (e.g. using AJAX).
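
    For reference, here is a minimal sketch of the kind of nginx + FastCGI pairing the first two questions point at. Everything in it is an illustrative assumption (server name, paths, worker counts), not a tested configuration, and note that on Windows nginx talks to php-cgi over TCP rather than a unix socket:

        worker_processes  4;    # roughly one per core on the i7-3770 (assumption)

        events {
            worker_connections  4096;
        }

        http {
            upstream php_backend {
                server 127.0.0.1:9000;   # php-cgi/FastCGI listening locally
            }

            server {
                listen       80;
                server_name  academics.example.edu;   # placeholder
                root         C:/www/site;             # placeholder path

                location ~ \.php$ {
                    include        fastcgi_params;
                    fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
                    fastcgi_pass   php_backend;
                }
            }
        }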


  • Mesh-networked servers via VPN

    - by microspino
    I have a design idea and I would like some advice from SF about it. I have 5 customers with small real-estate databases. I've built a desktop app for them, and now they would like to merge their databases to share their data. I don't want to centralize everything in one place, nor do I want to do server maintenance. They have also told me that all of them have small servers and maintenance guys available in their offices. Although everything seems suitable for a web application, I had the idea to experiment with something new: each customer's small server would be connected to the others through VPNs, in a sort of mesh network with no single point of failure. If one of the servers went down, the customers could still reach their databases on one of the other meshed servers instead of the local one that is down. During normal operations, all the servers would sync the database with each other over the VPNs. I can accept a half-day window of non-synched data; in other words, since I don't need real-time synchronization, the servers don't have to stay in sync at all times. I can migrate my data over to non-SQL technologies like CouchDB or Redis or whatever you suggest (a CouchDB sketch follows below). As you can see, I don't have a lot of constraints, and although I could go with a web application, I would like to delegate and decentralize support, data privacy, and management to my customers' offices as much as I can. Is this a crazy idea? Do you know if something similar exists? Which technology would you suggest?
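
    If you go the CouchDB route, its built-in replication maps naturally onto a mesh: each office replicates continuously with its peers in both directions, and missed updates are caught up when a node comes back. A minimal sketch, where the host names and the database name "realestate" are placeholders:

        # on office A: continuous push and pull with office B (repeat per peer)
        curl -X POST http://officeA:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source":"realestate","target":"http://officeB:5984/realestate","continuous":true}'

        curl -X POST http://officeA:5984/_replicate \
             -H 'Content-Type: application/json' \
             -d '{"source":"http://officeB:5984/realestate","target":"realestate","continuous":true}'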


  • The BitLocker-encrypted logical drive of my laptop is not accessible; clicking it gives the error "Application not found"

    - by Nauman Khan
    I have important personal data stored on drive 'F' of my laptop, a Dell D630 Core 2 Duo. My 4-year-old son also uses the laptop to play games, so to secure my data I used the BitLocker feature already included in my Windows 7 Ultimate 32-bit. This worked fine for me, and I have been able to access the data on drive 'F' whenever I needed. But today, when I tried to open the 'F' drive, an error box appeared saying "Application not found". I right-clicked and checked the 'Properties' of the 'F' drive: it showed Used Space = 0 bytes and Free Space = 0 bytes. I opened Disk Management, which shows the 'F' drive's file system as 'Unknown (BitLocker Encrypted)' but also shows it as a healthy logical drive. I opened 'Manage BitLocker' and found the 'F' drive shown as locked, with 'Unlock Drive' displayed against it; however, clicking 'Unlock Drive' does nothing. I opened 'TPM Administration' and found the message 'Compatible TPM cannot be found'. My BitLocker encryption was working fine before, which means I had a compatible TPM in my laptop. Where has it gone, and how can I enable it? Is my 'F' drive lost forever, and the data in it as well? (A command-line unlock attempt is sketched below.)
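
    If you saved a recovery password when you enabled BitLocker, the command-line tool can unlock a data volume even while the TPM is missing, since data drives can be unlocked with the 48-digit recovery password alone. A sketch from an elevated command prompt; the digits below are placeholders for your own recovery password:

        manage-bde -status F:
        manage-bde -unlock F: -RecoveryPassword 111111-222222-333333-444444-555555-666666-777777-888888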


  • When to upgrade a server to more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single, double, quad, etc. processors, where each processor has several cores. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client, and many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores, and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 for processing clients' math-crunching requests.
    Question 1: Does that mean that, with the 3 cores available, I can handle 3 separate client requests simultaneously (one per core)? I mean, apart from the shared RAM, is this effectively like having 3 individual machines from the point of view of processing client requests simultaneously? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores depending on whether the math-crunching process can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.
    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should I keep the original server and add another server with a single processor? Which route scales the application most efficiently, in terms of client requests processed per time interval? Is the choice, for example, limited by RAM (when you need more RAM than one box can hold, it's time to add another server), or by something else?
    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?


  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it; it's a pure router. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, then no more connections can be created for several minutes, then another 100 connections can be created, then there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximums go up about 10 times, so the problem is probably with many connections from different IPs. Since the traffic is simply routed, this shouldn't be about the number of file descriptors, iptables connection tracking, or the other things often proposed in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog, and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
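
    One kernel limit worth checking with this many distinct IPs is the ARP/neighbour table: when its gc_thresh values are exceeded, new neighbour entries (and hence new flows through the router) stall until garbage collection runs, which would match the burst-then-pause pattern. A sketch of how to inspect and raise the thresholds; the new values below are illustrative assumptions:

        # look for "neighbour table overflow" messages
        dmesg | grep -i neighbour

        # current thresholds
        sysctl net.ipv4.neigh.default.gc_thresh1 \
               net.ipv4.neigh.default.gc_thresh2 \
               net.ipv4.neigh.default.gc_thresh3

        # raise them (example values)
        sysctl -w net.ipv4.neigh.default.gc_thresh1=2048
        sysctl -w net.ipv4.neigh.default.gc_thresh2=4096
        sysctl -w net.ipv4.neigh.default.gc_thresh3=8192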


  • Need Help Scoping a Server to use for study (MCITP Ent Admin + SharePoint 2010)

    - by AVFamily76
    I need to study for the MCITP, but I also need to study for SharePoint 2010. I have a PowerEdge 1850 with two single-core CPUs and two 73 GB drives. It kills me on electricity, so I don't want to use it, and it won't do VT, but it could be one of three boxes for a lab that's cheap to buy but will cost a lot in electricity. I was thinking:
    Option #1: An Opteron 4170 HE (50-watt chip), 6 cores, only two bills ($200), but the boards are $250, so that's an $800 box; then get another box to dual-boot Win7/Hyper-V on the cheap?
    Option #2: A used quad-core, but how many VMs that are really banging away could it run at the same time? (Server 2008 R2, SQL Server 2008 R2, Search Server)
    Option #3: Study from books and just get one box that can run two VMs at the same time, even if slowly.
    The last time I had and used a home lab was five years ago, when I had a DC, SQL, Exchange, and a business-app box. That's where I got my server skills, just banging on it for four years, but I didn't read any books; now I have to get certified and actually know the material, and I'm just not sure how much attention I should pay to the box I use versus the studying and reading time. Sorry, it's a subjective question, and I'm obviously open to all sorts of abuse here, but I hope you can also tell me how many VMs I can run at the same time given what they'll be doing (SQL and the SharePoint FAST Search Server are resource hungry). Thanks!


  • Is there a way to have "default" or "placeholder" values in Excel?

    - by Iszi
    I've got a spreadsheet with cells that I want to be user-editable, but that I also want to have "default" or "placeholder" values in whenever there is no user-entered data. There are a couple of good use cases for this:
    - Prevent formula errors, while providing reasonable assumptions when a user has not entered (or has deleted) their own value. I could use conditional formatting to alert the user to default values, so as to prevent their ignorance of them; they can then make an informed choice as to whether that value is still appropriate for the intended calculations.
    - Give a short description of what is intended to be entered in the cell, without having to have a separate "instructions" segment or document. This would also eliminate the need for a nearby "label" cell, in some cases where it's really not appropriate.
    To accomplish what I want, I need some formula, script, or other advanced spreadsheet option that will do the following:
    - Show the default value in the cell before the user enters data.
    - Allow the default value to be found by any formulas referencing the cell, when there is no user-entered data in that cell.
    - Allow the user to freely (naturally, exactly as they would with any "normal" cell) overwrite the displayed value with their own value or formula, and have the user-entered data found by any formulas referencing the cell.
    - When the cell is blanked by deletion of user input, revert to the default value.
    Is there a way to do this in Excel, or am I asking too much of a spreadsheet program here?
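
    One pattern that covers most of these requirements without scripting is to separate the input cell from the effective-value cell: let the user type into B2, and have every other formula reference a helper cell containing something like =IF(B2="","default value",B2), with conditional formatting to flag when the default is in effect. The cell references and the default here are placeholder assumptions; this is a workaround sketch, not a native Excel placeholder feature, and it does not satisfy the "show the default in the editable cell itself" requirement.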


  • Using the full width of an Excel chart with two Y-axes

    - by Jørn Schou-Rode
    I am trying to create a line chart in Microsoft Excel 2007 with two data series, each with its own Y-axis. First, I create a simple chart by selecting the two data series and choosing Insert > Charts > Line from the Ribbon, which gives a chart with a single Y-axis. I then continue my quest by right-clicking one of the data series (lines) and choosing Format Data Series > Series Options > Secondary Axis. This is almost what I want, but I did not expect to see the gap between the last X-axis tick (x = 5) and the secondary (rightmost) Y-axis. Why does Excel introduce this gap, and is there anything I can do to avoid it? I have tried right-clicking the X-axis and selecting Format Axis > Axis Options > Position Axis: Between tick marks, but that only introduces a similar gap by the primary (leftmost) Y-axis.


  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. My /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check whether script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    A vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I create /home/user/web/id.php containing <?php system('id'); ?>, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thanks.
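
    One thing that stands out in the vhost above: .php is mapped to the handler application/x-httpd-php, but the [handlers] section of suphp.conf only knows application/x-httpd-suphp, so requests may never reach suPHP at all (and if mod_php is enabled, it will happily run them as www-data). A sketch of the usual pairing, assuming mod_suphp is installed and mod_php has been disabled (e.g. a2dismod php5):

        suPHP_Engine on
        suPHP_AddHandler application/x-httpd-suphp
        AddHandler application/x-httpd-suphp .php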


  • Change to a different user, or let a different user execute a command

    - by WG-
    I have a problem. There is a server which I can access over ssh with an account, let's say WG. Now there is a folder with the following permissions:

        drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures

    I want to copy this folder using rsync; however, since I am the user WG and not www-data, I cannot read it, so I want www-data to execute the rsync command for me. However, I do not possess sudo powers. My friend tells me that I actually am able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues, and he told me it has something to do with a reverse shell (which I figured out to be when you connect by ssh to your server and it then connects back to your own machine, or something like that). I also asked whether this is by design or actually a flaw in the system; he tells me it is both. Furthermore, I think it has something to do with the group permissions: if I am covered by the group permissions, then I can also read the files. Does anybody have a clue? (Some checks are sketched below.)
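
    Before trying to become www-data at all, it is worth checking whether your own account is already covered by the group bits or by the ACL (the trailing + in drwxr-s---+ means an ACL is set). A sketch, with /path/to/pictures and backuphost as placeholders:

        id                          # are you in the www-data group?
        getfacl /path/to/pictures   # does the ACL grant WG read access?

        # if either grants read, plain rsync as yourself should work:
        rsync -av /path/to/pictures/ WG@backuphost:/backup/pictures/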


  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed Backtrack 5 R3 KDE, and I realized that my wireless is not working, though wired works fine. Here's the lshw output:

        *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth0
           version: 05
           serial: 04:7d:7b:b7:46:f8
           size: 100MB/s
           capacity: 100MB/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s
           resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff

    And the lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04)
        00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
        00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
        00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4)
        00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
        00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04)
        00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
        02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
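
    The lshw and lspci output above shows only the wired Realtek NIC, so the first question is whether the kernel sees a wireless adapter at all. A few standard checks, with no assumptions beyond a stock Backtrack 5 install:

        lspci -nn | grep -iE 'network|wireless'   # PCI(e) wireless cards
        lsusb                                     # USB wireless adapters
        iwconfig                                  # interfaces with wireless extensions
        rfkill list                               # hard/soft RF kill switches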


  • How do I connect remotely to SQL Server from Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc.). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC Data Source and I get the same error:

        Connection failed:
        SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed:
        SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines, I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC Data Source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special; just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is that I need a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a PostgreSQL or MySQL database on non-Windows machines (due to DBA preference); now they also want to move the data from the legacy software to MySQL on Windows. Any assistance would be most appreciated; feel free to provide a full example connection string. (Two quick client-side checks are sketched below.) Best
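
    Two quick checks from the failing Windows client often narrow this down: confirm the TCP path to the instance, and connect with an explicit tcp: prefix and port so that name resolution and the SQL Browser service are taken out of the equation. The IP and port below are assumptions (1433 is only the default), and sqlcmd assumes the SQL client tools are installed:

        telnet 192.168.1.50 1433

        sqlcmd -S tcp:192.168.1.50,1433 -U myuser -P mypassword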


  • 3DMark score abnormal

    - by Sean
    I just bought a new laptop: a Core i7-3610QM (Ivy Bridge, 4 cores plus Hyper-Threading), 8 GB RAM, and an NVIDIA GT 610M 2 GB, running 64-bit Windows 7. I ran 3DMark 11, but the score is frustrating: at 1280×720 it is P730. That's too low, right? From what I found online, the score should be at least above 1000; am I right? I have never used this software before. I know NVIDIA has Optimus, so I put the laptop in the high-performance state and whitelisted the 3DMark program, but it didn't help. My guess is that 3DMark is using the i7's integrated graphics and cannot switch to the NVIDIA card; in 3DMark's run details, the graphics card cannot be identified (the row remains blank). Can anybody tell me whether this is normal? If not, can I use some other software to test whether my NVIDIA card is working properly? If the card doesn't work, I will return the new laptop ASAP. Thanks.


  • HD working with IDE-USB adapter but not recognised by BIOS

    - by Rajeeva
    I have a Windows XP Pentium III desktop with two hard drives. The first one has the OS and is luckily working. A few days ago, the second drive, on the secondary master IDE channel, became unable to read some files, and since then it was failing and reviving intermittently; now it always shows as failed on the IDE channel. While it was failing intermittently, I was able to copy some data from it to the other drive. During that time, if the disk failed while the system was running, the system froze and I had to reboot. I then got a new 80 GB HDD similar to the failing one (same make, a Seagate Barracuda), a new data cable for the drive, and an IDE-to-USB adapter. I installed the new hard drive in the old drive's place (secondary master) and formatted it; it worked for one day and then it also failed. At the same time, I connected the old drive through the IDE/USB adapter and could view all the data, and I managed to back some of it up from the old drive to the new one before the new drive failed. I have tried connecting the new drive on the primary channel as the slave disk, but when I do that, the BIOS detects neither the OS drive nor the new drive, and the system does not boot. Surprisingly, both the older (previously failed) drive and the new drive work fine on the USB channel with the IDE/USB adapter. I have ruled out any problem with the secondary channel, since the DVD-ROM I was using as primary slave is now connected as secondary master and works fine. I am really confused by this behaviour. Please can anybody help me solve this? Thanks.


  • CPU usage always below 10% in Windows Server 2008 R2 x64

    - by ???
    I am using a server running Windows Server 2008 R2 to run my program. The CPUs are Intel Xeon X5570s at 2.93 GHz, with 2 processors and 8 cores per processor. However, I found that CPU usage is almost always below 10%, even when I use 32 threads in my program. I also found, through Task Manager, that CPU usage sometimes reaches as high as 93% while my program is processing over 1,000 files per second; normally it processes only about 50 files per second, and this spike does not happen often. I used tools downloaded from the internet to make sure no core sleeps while the server is on; nothing changed. I also edited the Windows registry to make sure that I, as an administrator, have no CPU usage limit, but that changed nothing either. Is there any way I can make full use of my CPUs? That is, each core would run a thread of my program, and total CPU usage could reach over 50% when I use a reasonable number of threads. Has this happened to any of you, and could you help me with it? Thank you!


  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (a T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to "mysql/data/4.1.12@wibble" so that my script will not keep trying and failing to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be navigated and read from through the .zfs/snapshot directory, and it has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Has anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.
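
    Two things worth ruling out before blaming the snapshot itself: a forgotten clone (the usual cause of "dataset already exists" on destroy), and, on later Solaris 10 updates, a user hold pinning the snapshot. A sketch; the hold tag "keepme" is a placeholder:

        # any clone will list the snapshot as its origin
        zfs list -o name,origin -r mysql

        # on releases with user-hold support (Solaris 10 10/09 and later)
        zfs holds mysql/data/4.1.12@wibble
        zfs release keepme mysql/data/4.1.12@wibble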


  • Servers - Buying New vs Buying Second-hand

    - by Django Reinhardt
    We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a firewall/router server in front of a web application server and a database server. (In a previous question I illustrated this with a simple, and technically incorrect, diagram; the screenshot is omitted here.) We're now wondering about the specs of our two new machines (the web app and firewall servers) and whether we can get away with buying a couple of old servers. (Note: both machines will be running Windows Server 2008 R2.) We're not too concerned about the firewall/router server, as we're pretty sure it won't be taxed too heavily, but we are interested in the web app server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new. I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focused on processor speed and memory: a quad-core Intel Xeon X3323 2.5 GHz (2×3 MB cache, 1333 MHz FSB) with 16 GB of DDR2 667 MHz. But when I was looking for a cheap second-hand machine for our firewall/router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in one of these, wouldn't it do for the web app server and save us a ton of money in the process? For example, what about a second-hand machine with two dual-core AMD Opteron 2218 2.6 GHz processors (2 MB cache, 1000 MHz HyperTransport) and 16 GB of DDR2 667 MHz? Would it really be comparable with the more expensive (new) server above? Our engineer postulated that the reason companies upgrade their servers to newer processors is often to reduce their power costs, and that a 2.6 GHz processor is still a 2.6 GHz processor, no matter when it was made. Benchmarks on various sites don't really support this theory, but I was wondering what server admins thought. Thanks for any advice.


  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS IO is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated filesystem, such that I could have:
    - a filesystem on top of a RAID0 of ephemeral volumes, to maximise performance;
    - all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes, to ensure no data loss.
    The advantages of this would be:
    - the best possible IO performance for the DB server, with no network delay in IO;
    - decreased IO on EBS volumes (as all read IO will be done on the ephemeral volumes), and so decreased cost;
    - good data security, as it's backed onto redundant EBS volumes.
    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed filesystems, e.g. GlusterFS, DRBD, etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? (One md-RAID idea is sketched below.) I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, making this whole idea redundant)? Is there some flaw in the plan?
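
    One block-level approach that matches this wish list without a distributed filesystem is a Linux md RAID1 whose two members are the ephemeral RAID0 and the EBS-backed array, with the EBS member marked write-mostly (reads never touch it) and write-behind (its writes are allowed to lag). This is a sketch under the assumption that /dev/md0 is the ephemeral RAID0 and /dev/md1 the EBS array, not a tested EC2 recipe:

        mdadm --create /dev/md2 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=1024 \
              /dev/md0 --write-mostly /dev/md1

        mkfs.ext4 /dev/md2
        mount /dev/md2 /var/lib/mysql   # placeholder mount point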


  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement some data logging to CSV files using a Data Collector Set in PerfMon (on a Windows Server 2008 R2 system). The issue we are running into is that we seemingly can't control the permissions being set on the log files created by PerfMon. What we want is for the log files to have Everyone: Full Control. We have a directory structure set up where all logs go into a folder, c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2). In this example, c:\vms\PerfMonLogs\EvaluationG2 has Everyone: Full Control; below is the icacls output for this directory:

        EVALUATIONG2\ Everyone:(OI)(CI)(F)
                      NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    When the data collector set runs, it creates new subfolders and files within c:\vms\PerfMonLogs\EvaluationG2, e.g. C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3. Each of these directories and files has the following permissions:

        M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    So the new folders are not simply inheriting permissions from the parent folder (we don't know why). We tried adding Everyone: Full Control using the Security tab on the collector set, with no luck. Any ideas? How do we control the permissions on the log files generated by a PerfMon data collector set? (A recursive icacls grant is sketched below.)
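
    As a stopgap while chasing why inheritance is skipped, the ACE can be re-applied recursively after the collector set has run; the path is from the question, and scheduling this after each collection is an assumption:

        icacls C:\vms\PerfMonLogs\EvaluationG2 /grant Everyone:(OI)(CI)F /T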


  • Expand a volume residing on one X-RAID disk in a Netgear ReadyNAS Duo v2

    - by Sid
    I've got a Netgear ReadyNAS Duo v2 (2 disk slots). The system is configured with X-RAID, which does not provide much flexibility but automatically expands based on RAID-5-like logic. I had two 500 GB hard disks installed, redundant, so I had 500 GB of volume size. I wanted to upgrade the whole system to 2 × 3 TB hard disks while keeping both the data already on the NAS and the data on one of the two 3 TB disks. So I did this:
    1. Unplugged one disk from the ReadyNAS. Now the ReadyNAS has 1 × 500 GB, non-redundant.
    2. Plugged in one empty 3 TB hard disk. Now the ReadyNAS has 1 × 500 GB + 1 × 3 TB, redundant. I waited for the resync.
    3. Unplugged the 500 GB hard disk, so that I have only the 3 TB hard disk with the previous data.
    Now I want to copy the data on my other 3 TB hard disk over to the NAS, so that I can then plug that disk into the NAS and use it for redundancy. The problem is that the NAS has the (single) 3 TB hard disk in X-RAID, but the volume does not expand to 3 TB; it remains fixed at 500 GB. Is there a way to tell the ReadyNAS to force expanding the volume to the whole disk without plugging in another hard disk of the same size?


  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4 GB of RAM (800 MHz) on an Asus P5K-E motherboard, which I built two years ago. My original plan was to one day upgrade the machine to 8 GB (1066 MHz, the maximum the P5K-E allows) and to the Core 2 Quad Q9550, to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. When I looked into it, 8 GB of DDR3 RAM isn't much more expensive than 8 GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or should I put that money towards an SSD or 10,000 rpm drive for the existing system's OS/apps/virtual machine drive? Or just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost over the Q9550 for what I'd be using it for? Thanks in advance for your input!


  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    The three partitions for my system (/home, / and swap, all ext4) are created with LVM inside a LUKS (dm-crypt) partition. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, who can then access the encrypted partitions, and I don't want them to be able to recover my cache and all the data I have deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumour.) But I still haven't found a way to wipe this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so that the partition was completely full of data; df said /dev/mapper/home was 100% used, with 0 sectors available. But I could still recover gigabytes of data with photorec, even though I selected recovery from the free space only. photorec displays /dev/mapper/home as 340 GB / 317 GiB (RO), but df reports the size of /home as just 313G. Why the difference, and what does the 340 GB refer to? It looks like there is space on my /dev/mapper/home partition that I can't access to overwrite, but that photorec can access to recover from. I also checked for corrupted sectors; there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of these loads of recoverable files in order to securely delete them?
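
    Part of the gap is likely ext4's reserved blocks (about 5% by default, writable only by root) plus the difference between the filesystem and the raw device that photorec scans. A sketch of a fuller wipe, assuming /home is the mount point of /dev/mapper/home and running as root so the reserved blocks get filled too:

        dd if=/dev/zero of=/home/fillitup bs=1M   # run until it fails with "No space left on device"
        sync
        rm /home/fillitup

        # how many blocks are reserved for root on this filesystem
        tune2fs -l /dev/mapper/home | grep -i 'reserved block'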


  • Sed: Deleting all content matching a pattern

    - by Svish
    I have some plist files on Mac OS X that I would like to shrink. They contain a lot of <dict> entries with <key> elements and values. One of these keys is a thumbnail, which has a <data> value holding base64-encoded binary (I think). I would like to remove this key and its value. I was thinking this could be done with sed, but I don't really know how to use it, and it seems that sed only works on a line-by-line basis? Either way, I was hoping someone could help me out. In the file, I would like to delete everything that matches the following pattern, or something close to it:

        <key>Thumbnail<\/key>[^<]*<\/data>

    In the file it looks like this:

        // Other keys and values
        <key>Thumbnail</key>
        <data>
        TU0AKgAAOEi25Pqx3/ip2fak0vOdzPCVxu2RweuPv+mLu+mIt+aGtuaEtOSB
        ...
        dCBBcHBsZSBDb21wdXRlciwgSW5jLiwgMjAwNQAAAAA=
        </data>
        // Other keys and values

    Does anyone know how I could do this? Also, if there are any better tools I can use in the terminal to do this, I would like to know about them as well. :)
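
    sed can handle this with a line-range address even though the content spans lines: delete from the line matching the Thumbnail key through the next closing </data>. A sketch; MyFile.plist is a placeholder, and this assumes the plist is XML (for binary plists, plutil -convert xml1 can convert first). Work on a copy until you trust the result:

        sed '/<key>Thumbnail<\/key>/,/<\/data>/d' MyFile.plist > MyFile.stripped.plist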


  • Hardware testing tool/suite

    - by Aviator
    Hi all, I just bought a new Core i5 system (assembled) and started installing Windows 7. The installation failed many times before finally completing, and after that there were frequent crashes related to MEMORY. So I checked the RAM using memtest86+ and found many errors. I had the RAM replaced by the vendor, but now if I install ANY OS, at some point in the installation the machine either freezes completely, with no response for hours, or restarts automatically. I have tried installing Windows 7, Windows Vista, and Ubuntu 9.10. I tested the new RAM again and found no problems in about 2 passes of memtest86+. I even updated the BIOS from a bootable USB stick, and still the problem persists. I am really not sure which piece of hardware is causing trouble. I don't have any OS installed, so I can only test using bootable CDs, DVDs, and USB. Please advise on how to proceed: are there any suites or separate tools for checking the integrity of each hardware component and troubleshooting it? I want to confirm which part is faulty before going for a replacement. Thanks a lot! This is the config: Core i5, MSI P55-GD65, G.Skill 2×2 GB, Seagate 500 GB 7200 rpm, CM Extreme 600W PSU, Sapphire Radeon 5770 1 GB, LG DVD writer.

