Search Results



  • Exchange 2010 and DAG - all roles on both servers?

    - by Keith
    We recently migrated to an Exchange 2010 server. Currently all of the roles and mailboxes are installed on one server (we are a small company with fewer than 100 users). I want to use a DAG for replication, but most DAG setups seem to require at least three or four servers in total. Is there any way to make this work with just two servers, with both of them holding all the roles and mailboxes? Or maybe there is a better way to do this than a DAG? I'm open to suggestions. The goal here is to have some sort of replicated server, so that if there is an issue with our primary Exchange server, another one can be brought up within an hour or so with all current information (not a backup). It doesn't necessarily have to be instantaneous.

    Read the article

  • Enlarge partition on SD card

    - by chenwj
    I followed Cloning an SD card onto a larger SD card to clone a 2 GB SD card onto a 32 GB SD card; the file system is ext4. However, on the 32 GB card I can only see 2 GB of available space. Is there a way to expand it to the full size? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2          147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after the dd, but got the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea what I am doing wrong? Thanks.
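
    For what it's worth, the usual sequence here is to grow the partition first and only then grow the filesystem, pointing resize2fs at the partition device rather than the whole disk. A minimal sketch, assuming the ext4 filesystem lives on /dev/sdb2 as in the fdisk output above (resizepart needs a reasonably recent parted; on older versions, delete and recreate /dev/sdb2 in fdisk with the same start sector, 147456, and a larger end):

        # grow partition 2 to the end of the card
        sudo parted /dev/sdb resizepart 2 100%

        # the filesystem must be clean before an offline resize
        sudo e2fsck -f /dev/sdb2

        # grow the filesystem on the partition, not on the whole disk
        sudo resize2fs /dev/sdb2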

    Read the article

  • How do I optimize a high-traffic WordPress website?

    - by mha
    Hello, I am running a WordPress-based site which is hosted on (mt) under the DV-Extreme package with 2 GB + 256 MB add-on RAM. It is a multi-author site where people are busy writing posts, commenting, updating statuses, etc. According to Google Analytics, this month's traffic was: Visitors = 45,764; Pageviews = 1,051,186; Visits = 141,447. I have put the site behind a CDN, compressed the CSS, and used the W3 Total Cache plugin to optimize the site. Since last month I have been getting repeated down notices from Pingdom, and right now I am facing more down alerts than before; I have to restart the site several times to bring it back up. Are my hosting resources not enough? Do I need more resources, or what else could be the solution? Helpful suggestions will be appreciated. Thanks.

    Read the article

  • Migrate 3 terabytes of files to a new Windows 2003 server

    - by smackaysmith
    We have a new file server to handle the obscene number of files generated by the company (PDFs, XLSs, DOCs and JPGs). The files being moved to the new server total about 3 TB. The problem is that we can't take the company down for days to move the files. The other problem is that the applications creating all these files have to reference previous files, so we can't simply point them at the new server. There also isn't an option to have the applications create files on the new server but reference the old server for existing files. The servers are x64 Windows 2003 R2, and both are on the same subnet. DFS doesn't work. Is there an application that can handle this amount of data, copy the files over, throttle bandwidth, and do a 'merge'? By merge I mean constantly copying over newly created files until the two servers are in sync.
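
    One hedged sketch of the staged-copy approach, using robocopy (available for Windows 2003 from the resource kit; the share names below are hypothetical): pre-seed the bulk of the data while the old server stays live, then re-run near cutover so only newly created or changed files are copied.

        rem initial bulk copy; /IPG throttles bandwidth, /Z makes copies restartable
        robocopy \\oldserver\data \\newserver\data /E /COPYALL /Z /IPG:50 /R:2 /W:5 /LOG:C:\seed.log

        rem re-run at cutover: by default only new or changed files are copied
        robocopy \\oldserver\data \\newserver\data /E /COPYALL /R:2 /W:5 /LOG:C:\delta.log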

    Read the article

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all the usual options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? I can't understand the reason behind such an arbitrary limit, which doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. Is this at all possible?

    Read the article

  • How can I shrink my Windows partition further than the disk management is allowing?

    - by Walkerneo
    I just bought a new computer with a 2 TB hard drive that has only a single partition. I would like to divide this into at least 4 partitions, but when I try to shrink the current partition, Disk Management says the total size is 1,888,171 MB and that the available shrink space is only 939,075 MB. Only about 40 GB of disk space is used right now, so why can't I shrink the partition to somewhere around that? I read at http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ that this is because of unmovable system files, but I doubt that is the only problem. I would like to get this partition down to 500 GB. How can I do this?
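
    A commonly suggested workaround, sketched below on the assumption that the pagefile, the hibernation file, and shadow-copy data are the unmovable blockers (run from an elevated command prompt):

        rem remove hiberfil.sys
        powercfg /h off

        rem consolidate free space so movable data sits at the front of the volume
        defrag C: /X

        rem also move the pagefile off C: temporarily (System Properties >
        rem Advanced > Performance Settings), then retry the shrink in
        rem diskmgmt.msc and re-enable everything afterwards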

    Read the article

  • Limiting memory usage and minimizing swap thrashing on Unix / Linux

    - by camelccc
    I have a few machines that I use for running large numbers of jobs, where I try to limit the number of jobs so as not to exceed the available RAM of the machine. Occasionally I misestimate how much memory some of the jobs will take, and the machine starts thrashing the swap file. I resolve this by sending kill -s STOP to one of the jobs so that it can be swapped out. Does anyone know of a utility that will monitor a server for processes with a specific name, and then pause the one with the smallest memory footprint if the total memory consumption reaches a desired threshold, so that the larger ones can run and complete with a minimum of swap-file thrashing? Paused processes then need to be resumed once some existing processes have completed.
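
    I am not aware of a packaged utility for exactly this, but a minimal sketch of such a watchdog is below; the process name, threshold, and poll interval are all placeholders:

        #!/bin/bash
        # Pause the smallest "bigjob" process when the group's total resident
        # memory exceeds LIMIT_KB; resume a stopped one when usage drops.
        NAME=bigjob
        LIMIT_KB=$((48 * 1024 * 1024))   # 48 GB

        while sleep 30; do
            total=$(ps -C "$NAME" -o rss= | awk '{s+=$1} END {print s+0}')
            if [ "$total" -gt "$LIMIT_KB" ]; then
                # smallest resident set first, skipping already-stopped jobs
                victim=$(ps -C "$NAME" -o pid=,rss=,stat= --sort=rss |
                         awk '$3 !~ /^T/ {print $1; exit}')
                [ -n "$victim" ] && kill -STOP "$victim"
            elif [ "$total" -lt $((LIMIT_KB * 8 / 10)) ]; then
                # comfortable headroom again: wake one stopped job
                stopped=$(ps -C "$NAME" -o pid=,stat= | awk '$2 ~ /^T/ {print $1; exit}')
                [ -n "$stopped" ] && kill -CONT "$stopped"
            fi
        done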

    Read the article

  • After creating a MySQL user with all privileges, the user cannot create databases in phpMyAdmin and only sees the information_schema table

    - by GHarping
    This is a recurring problem for some reason... Using MySQL 5.5, I am simply trying to create a user that can connect to the database remotely, have access to all databases, and create databases. I created a user using:

        create user 'dev'@'%' identified by 'abcdefg';

    then granted all privileges using:

        GRANT ALL ON *.* to 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION;

    The result is that the user cannot create databases and, for some reason, can only see the information_schema database: the Databases page in phpMyAdmin shows "Create database: No Privileges" and lists just information_schema (Total: 1). Does anyone know why this might be happening?
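
    A quick diagnostic sketch (host names and passwords are placeholders): a remote login is matched against one specific user@host entry, so the connection may be hitting 'dev'@'%' (which received no grants) instead of 'dev'@'192.168.%'. Check which entry the server actually matched and what it holds:

        # from the remote machine: which user@host entry did the server match?
        mysql -h dbserver -u dev -p -e "SELECT CURRENT_USER(); SHOW GRANTS;"

        # on the server: list the dev entries and their host patterns
        mysql -u root -p -e "SELECT user, host FROM mysql.user WHERE user = 'dev';"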

    Read the article

  • How to get Apache to follow symlink instead of downloading it?

    - by user792445
    I am just using the standard Apache config file, which says it follows symlinks, but when I hit the URL http://localhost/test it downloads the symlink file instead of following it. What config do I need to change to get Apache to follow the symlink instead of downloading it? This is an ls on the directory:

        $ ls -al
        total 10
        drwx------+ 1 SYSTEM SYSTEM  0 Oct 20 10:55 .
        drwx------+ 1 SYSTEM SYSTEM  0 Aug 26 12:27 ..
        -rw-r--r--+ 1 me     None   47 Oct 20 10:14 index.html
        lrwxrwxrwx  1 me     None   29 Oct 19 17:10 test -> /home/me/projects/test

    This is in my Apache config file:

        <Directory "D:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    Read the article

  • What's faster, cp -R or unpacking tar.gz files?

    - by Buttle Butkus
    I have some tar.gz files that total many gigabytes on a CentOS system. Most of the tar.gz files are actually pretty small, but the ones with images are large: one is 7.7 GB, another is about 4 GB, and a couple are around 1 GB. I have already unpacked the files once and now I want a second copy of them all. I assumed that copying the unpacked files would be faster than re-unpacking them, but I started running cp -R about 10 minutes ago and so far less than 500 MB has been copied. I feel certain that the unpacking process was faster. Am I right? And if so, why? It doesn't seem to make sense that unpacking would be faster than simply duplicating the existing structures.
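
    A rough way to settle it empirically on the same data (the paths are placeholders and the destination directories must already exist). The third form, a copy piped through tar, often beats cp -R on trees of many small files because it streams the reads and writes:

        time tar -xzf images.tar.gz -C /srv/copy1          # decompress + write
        time cp -R /srv/copy1 /srv/copy2                   # read tree + write tree
        time sh -c 'tar -C /srv/copy1 -cf - . | tar -C /srv/copy3 -xf -'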

    Read the article

  • Updating and deleting Java (Red Hat / CentOS)

    - by JochemTheSchoolKid
    I am a total noob with Linux, so please explain clearly if you have a solution for me. I have a VPS and I want to update Java. I found a guide on the Java site which says:

        rpm -e <package_name>

    I searched for the packages:

        [root@srv1 ~]# rpm -qa | grep java
        java_cup-0.10k-5.el6.x86_64
        java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

    Then I tried the delete command:

        [root@srv1 ~]# rpm -e java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64
        error: Failed dependencies:
                java-gcj-compat is needed by (installed) java_cup-1:0.10k-5.el6.x86_64
                java-gcj-compat >= 1.0.70 is needed by (installed) sinjdoc-0.5-9.1.el6.x86_64

    What should I do now?
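
    For what it's worth, a sketch of the two usual routes: let yum remove the dependent packages for you, or hand rpm every dependent package in one transaction (names taken from the error output above):

        # yum removes the package plus whatever depends on it, after confirmation
        yum remove java-1.5.0-gcj

        # or remove the package and its dependents together with rpm
        rpm -e sinjdoc-0.5-9.1.el6.x86_64 java_cup-0.10k-5.el6.x86_64 \
            java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64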

    Read the article

  • AWS EC2: how to compute the cost

    - by EsseTi
    I'm new to AWS; I'm using the free tier right now and it's terrific. But in a year the free tier expires. I went to the pricing page at http://aws.amazon.com/ec2/pricing/ but I didn't really get how to compute the cost. The prices are in dollars per hour, but does that mean that if I need my application running 24 hours a day, 365 days a year, I have to multiply by 8,760? They write about usage, but how do I compute this value? If I have one website where people spend something like 10 minutes a month in total, and another where people spend 750 hours a month, do I pay the same? I can't believe it is the same price. PS: if I have a scheduled task, does it affect the usage?
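
    To make the arithmetic concrete (the hourly rate below is purely illustrative): on-demand EC2 billing is per hour the instance is running, not per hour visitors spend on the site, so an always-on instance costs

        24 h/day × 365 days = 8,760 instance-hours per year
        8,760 h × $0.10/h ≈ $876 per year

    whether it serves 10 minutes or 750 hours of visitor time a month. The low-traffic site only gets cheaper if its instance is stopped while idle, and a scheduled task affects the bill only insofar as it keeps an instance running.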

    Read the article

  • How to fix a folder content glitch in a FAT32 filesystem?

    - by kagali-san
    On my 450 GB FAT32 partition, a directory has the wrong contents after an improper USB drive disconnect. It used to contain:

        /files (total 250 GB)
        /files/folder/
        /files/folder2/
        /files/somethn.gs

    Now it contains:

        /files/weir?d?name, 5 MB

    Windows and Linux both say that most of the disk space (400 of 450 GB) is occupied, but the sum of all files and directories is about 130 GB, so it seems the files are still there. There have been no write attempts since discovery. Tools/methods rejected so far:

        chkdsk (Windows 7): checking completed, but no changes.
        fsck.vfat: attempted to ruin the drive even more (there are a lot of LFN and Unicode names).
        EasyRecovery: didn't see the target folder (maybe wrong scan options? I tried best match, but not a raw scan - that would take days since the drive is a 5200 rpm USB disk).

    Read the article

  • Disk controller speed responsible for slow write speeds?

    - by vizvayu
    I have a question. I'm using ESXi 4.0 U1 on an IBM x3200 M2 with an integrated LSI 1064e RAID controller, without any kind of cache. I have three 250 GB hot-swap SATA HDs configured in RAID 1E (IME). ESXi works fine and read speeds are quite OK, but write speeds are incredibly slow, never more than 8 MB/s - and that is the best-case scenario, benchmarking with iozone streaming writes, using a VMware Paravirtual controller and with only this VM active and no swapping of any kind (total VM memory reserved). I already wrote to IBM, but I don't have any kind of paid support so they didn't even answer, so I'm just wondering... does anybody have experience with a similar setup? I just want to be sure this is hardware related and can't be fixed with some kind of config option, because I'm thinking of buying a new RAID controller (the Adaptec 2405 looks nice). Thanks again!

    Read the article

  • How to take a backup mirror copy of the C: drive?

    - by metal gear solid
    I've installed everything I need on my C: drive: Windows 7, updated drivers, utilities, software, etc. Now I want to take a mirror backup of everything onto a DVD, or I can keep the backup on another USB HDD, so that if I face any Windows or hard-drive failure in the future I can restore everything exactly as it is today. I don't want to reinstall everything again - Windows, drivers, all the utilities and needed software. My C: drive's total capacity is 108 GB, but the data on it is only 12 GB. What should I do? What is the best solution for me? I need a free solution.
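
    For what it's worth, Windows 7 itself can do this for free: Backup and Restore's "Create a system image" writes a restorable image of C: to DVDs or a USB HDD, and you restore it by booting the Windows 7 install media and choosing System Image Recovery. A command-line sketch of the same thing, assuming the USB drive is mounted at E::

        rem write a system image of C: (and everything needed to boot) to E:
        wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet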

    Read the article

  • XenServer: xe command never returns?

    - by ethrbunny
    I'm trying to move a XenServer 6.2 pool to a new IP address range. I've got three servers in total: two currently at their new IPs but no longer in the pool, and one remaining. I'm trying to set the IP address information on the two disconnected ones using the xe command and all of its variants. Oddly enough, it never returns any values:

        xe host-list

    It just sits there until I Ctrl-C it. The server is still awake and responding, though; I can enter other commands (e.g. ifconfig) and they work fine. If I enter this same command on the remaining server in the pool, it works OK. I've tried restarting the toolstack and even rebooting, with no change. What am I doing wrong?

    Read the article

  • Folder sync application which can sync over Internet (the other machine specified by an IP)?

    - by Adal
    I need to sync some folders between two Windows 7 machines. While they are connected to the same LAN, they can't see each other over Windows networking, since sharing is disabled on both of them (for security reasons). Do you know of any sync app which can work over IP? The folder I need to sync has 500,000 files in it (80 GB in total), so the sync app should be pretty efficient. At the moment I copy the files from one machine to the other over FTP, but it takes forever, since a separate connection is opened for each file. Or maybe you know of some app which can efficiently transfer a large number of files between two machines over the Internet?
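
    One sketch of the single-connection approach: an rsync port such as cwRsync (or rsync under Cygwin, both free) keeps one TCP session open, walks the tree once, and on later runs transfers only the files that have changed. The paths, user, and address below are placeholders, and the receiving machine needs an rsync or SSH service listening:

        rsync -av --partial /cygdrive/d/data/ user@192.168.1.20:/cygdrive/d/data/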

    Read the article

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open table cache set to 1800 and I have a total of 1,112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens because concurrent connections all open tables, and I think I should raise the cache limit. I understand that the cache size is limited by the file descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website yields mostly posts explaining the connection factor, or comes up with indecisive answers. My question: can I safely increase the open table cache limit? Is there a maximum?
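
    A sketch of the checks and the change (the value 4096 is illustrative): compare the cache counters and the file-descriptor ceilings before raising anything, and persist the setting in my.cnf so it survives a restart.

        # a fast-climbing Opened_tables counter means the cache is churning
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Open%tables'; SHOW VARIABLES LIKE 'table_open_cache';"

        # the cache must stay under these file-descriptor limits
        mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"
        ulimit -n

        # raise it at runtime if your build allows it, then persist in my.cnf
        mysql -u root -p -e "SET GLOBAL table_open_cache = 4096;"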

    Read the article

  • Apache stops serving requests when connections increase

    - by Gunjan
    The values of the MaxClients, ServerLimit, etc. parameters are quite high (4000). Available RAM on the server is high too (~8 GB), and the load average remains below 1 on a 24-core CPU. But when the number of visitors on the website increases, Apache just stops serving requests: the Apache error log is blank and the access log shows no more requests coming in. Restarting Apache makes it work again until the number of requests increases again. Any ideas where to start looking?

    UPDATE: running with LogLevel debug, I am getting the error below in the Apache error log:

        [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers),
        spawning 32 children, there are 479 idle, and 1027 total children
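
    That log line hints that the parent is not spawning children fast enough for traffic spikes, rather than a hard limit being hit. A sketch of what is worth capturing at the moment of a stall (mod_status must be enabled for the first command):

        # scoreboard: how many workers exist and what each is doing
        apachectl status

        # actual worker count versus the configured limits
        ps -C apache2 --no-headers | wc -l

        # connections the kernel has queued but Apache has not yet accepted
        ss -nt state syn-recv | wc -l

    If the children really are spawning too slowly, raising StartServers and Min/MaxSpareServers, as the message itself suggests, is the obvious first experiment.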

    Read the article

  • How can I reorder parts of a video file

    - by sandeep
    I have downloaded an MKV movie that came as three files suffixed .001, .002, and .003. When I join them together with tools like WinRAR, 7-Zip, or HJSplit, the concatenated file shows only the last 40 minutes of the movie's 1.22-hour total length. If I play the individual parts (.001, .002, .003) with VLC, I can see that .003 is actually the first part of the video and .001 is the last part. Can anyone tell me how to join these parts of the movie in the correct order, or how I can turn the .003 file into the .001 file?
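
    If the three pieces are plain binary splits of one file, which the .001/.002/.003 naming usually indicates, they can be concatenated by hand in the order the video actually plays; the file names below are placeholders, and per the observation above .003 goes first:

        rem Windows: binary-mode concatenation in playback order
        copy /b movie.mkv.003 + movie.mkv.001 + movie.mkv.002 movie-joined.mkv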

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a WordPress website with Nginx and Memcached, with simple DNS round-robin balancing via A records pointing at both web servers. I've noticed the following entries in both web servers' access logs:

        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin on each WordPress installation to point at the loopback address (127.0.0.1:11211). Is this happening because each web server is trying to access content that is cached on the other web server? Should I instead list the IPs of both web servers in the W3 plugin on each website (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure whether this is related to Memcached or to some configuration issue on the server itself. Regards

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have done this test a number of times before and never hit this error; I have only been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                     total       used       free     shared    buffers     cached
        Mem:       3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:     147548    3715220
        Swap:      3905528          0    3905528

    Read the article

  • Show symbolic links AND their targets in web directory listing (apache)

    - by Erwan Queffélec
    Listing a directory's contents with ls -l shows this output:

        total 12
        drwxr-xr-x 3 root root 4096 Dec 11 16:38 2.3
        drwxr-xr-x 5 root root 4096 Dec 11 16:38 2.4
        drwxr-xr-x 2 root root 4096 Dec 11 16:38 archive
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 current -> 2.4/2.4.1/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 next -> 2.4/2.4.2/
        lrwxrwxrwx 1 root root   10 Dec 11 16:38 previous -> 2.4/2.4.0/

    Notice how it shows the symbolic links and their respective targets. I need to know if there is a way to get the same behaviour in Apache directory browsing. If Apache is not capable of it, as I suspect, is there an application (FLOSS) providing that kind of behaviour?

    Read the article

  • How to use value from primary accessdatasource control as parameter in select query for secondary accessdatasource control

    - by weedave
    Hi, I'm trying to display all orders placed. I have a primary accessdatasource control with a select query that gets the customer information and the orderID, and I want to use the orderID value from this first query as a parameter for the secondary accessdatasource control, which selects the product information for the products in the order. In plain English, I want to select product info from the product table where orderID = ? (where ? is the orderID value from the first query). I have tried <%# Eval("OrderID") %> but I get a "server tag not well formed" error. I do get results returned when I just type an order ID in, but obviously every result (order) then shows the same product info...

        <asp:Repeater ID="Repeater1" runat="server" DataSourceID="AccessDataSource1">
          <ItemTemplate>
            <asp:AccessDataSource ID="AccessDataSource2" runat="server"
                DataFile="~/App_Data/project.mdb"
                SelectCommand="SELECT orderDetails.OrderID, album.Artist, album.Album, album.Cost,
                    album.ImageURL, orderDetails.Quantity, orderDetails.Total
                    FROM (album INNER JOIN orderDetails ON album.AlbumID = orderDetails.AlbumID)
                    WHERE (orderDetails.OrderID = ? )">
              <SelectParameters>
                <!-- Error is on this line -->
                <asp:Parameter Name="OrderID" DefaultValue="<%# Eval("OrderID") %>" />
              </SelectParameters>
            </asp:AccessDataSource>
            <div class="viewAllOrdersOrderArea">
              <div class="viewAllOrdersOrderSummary">
                <p><b>Order ID: </b><%# Eval("OrderID") %></p>
                <h4>Shipping Details</h4>
                <p><b>Shipping Address: </b><%# Eval("ShippingName") %>, <%# Eval("ShippingAddress") %>,
                   <%# Eval("ShippingTown") %>, <%# Eval("ShippingPostcode") %></p>
                <h4>Payment Details</h4>
                <p><b>Cardholder's Address: </b><%# Eval("CardHolder") %>, <%# Eval("BillingAddress") %>,
                   <%# Eval("BillingTown") %>, <%# Eval("BillingPostcode") %></p>
                <p><b>Payment Method: </b><%# Eval("CardType") %></p>
                <p><b>Card Number: </b><%# Eval("CardNumber") %></p>
                <p><b>Start Date: </b><%# Eval("StartDate") %>, Expiry Date: <%# Eval("ExpiryDate") %></p>
                <p><b>Security Digits: </b><%# Eval("SecurityDigits") %></p>
                <h4>Ordered items:</h4>
                <asp:Repeater ID="Repeater2" runat="server" DataSourceID="AccessDataSource2">
                  <ItemTemplate>
                    <div style="display: block; float: left;">
                      <div class="viewAllOrdersProductImage">
                        <img width="70px" height="70px" alt="<%# Eval("Artist") %> - <%# Eval("Album") %>"
                             src="assets/images/thumbs/<%# Eval("ImageURL") %>" />
                      </div>
                      <div style="display:block; float:left; padding-top:15px; padding-right:20px;">
                        <p><b><%# Eval("Artist") %> - <%# Eval("Album") %></b></p>
                        <p>£<%# Eval("Cost") %> x <%# Eval("Quantity") %> = £<%# Eval("Total") %></p>
                      </div>
                    </div>
                  </ItemTemplate>
                </asp:Repeater>
              </div>
            </div>
          </ItemTemplate>
        </asp:Repeater>

    Read the article

  • vSwitch configuration with 12 uplinks

    - by Joshua
    I have been doing a lot of research on vSwitch configurations, but I think I am more confused now after all the reading I have done. Here is my situation: 3 ESX hosts (12 NICs each), 1 iSCSI SAN, and 2 Force10 switches. Should I create individual vSwitches for management, vMotion, VM, and iSCSI traffic, or do I need to group anything together in the same vSwitch? I am going to have 4 VLANs in total, one for each of those traffic types. Do I need to do any trunking on the physical switches, or do I just assign the correct VLAN to each physical switch port?
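
    For reference, a sketch of what one traffic type looks like on the ESX command line, whichever grouping you settle on (the vSwitch name, VLAN ID, and vmnic numbers are placeholders). With VLAN tagging on the port groups like this, the physical switch ports facing the uplinks are configured as trunks carrying all four VLANs:

        esxcfg-vswitch -a vSwitch1                  # create the vSwitch
        esxcfg-vswitch -L vmnic2 vSwitch1           # attach two uplinks for redundancy
        esxcfg-vswitch -L vmnic3 vSwitch1
        esxcfg-vswitch -A vMotion vSwitch1          # add a port group
        esxcfg-vswitch -v 20 -p vMotion vSwitch1    # tag it with VLAN 20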

    Read the article
