Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology, given the following constraints?

    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations, meant to provide redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection
    - Multiple locations worldwide using these Exchange servers; most locations (certainly the ones hosting Exchange) are connected by VPN tunnel

    I was thinking something like this:

    - Location MAIN (about 70-100 people): 2x TMG 2010 in NLB, 1x Exchange 2010 CAS/HUB role, 2x Exchange 2010 Mailbox role (active + passive)
    - Location SUPPORT (about 20 people): 2x TMG 2010 in NLB, 1x Exchange 2010 CAS/HUB role, 2x Exchange 2010 Mailbox role (active + passive)

    Management wants to be sure that if the main location has problems (power failure, link loss, etc.), the second location can carry all traffic from around the world, and vice versa. We have 6-7 locations, with more coming up (not big ones, roughly 10+ people each). I do know that the single CAS/HUB per site is a single point of failure (and not load-balanced), but I simply lack the licenses for redundancy on that role. What do you think of this approach, and what would you do differently?
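
    For reference, a minimal Exchange Management Shell sketch of the DAG side of such a design, assuming a single DAG spanning both sites with a file share witness in MAIN; all server, witness, and database names below are placeholders, not from the question (and note the witness should not itself be a DAG member):

        # create the DAG and add one mailbox server per site
        New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS-MAIN -WitnessDirectory C:\DAG1
        Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-MAIN
        Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX-SUPPORT
        # replicate the MAIN database to SUPPORT as the passive copy
        Add-MailboxDatabaseCopy -Identity DB-MAIN -MailboxServer MBX-SUPPORT -ActivationPreference 2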

    Read the article

  • Using UDF on a USB flash drive

    - by CesarB
    After failing to copy a file bigger than 4 GB to my 8 GB USB flash drive, I formatted it as ext3. While this is working fine for me so far, it will cause problems if I want to use it to copy files to someone who does not use Linux. I am thinking of formatting it as UDF instead, which I hope would allow it to be read (and possibly even written) on the three most popular operating systems (Windows, Mac OS, and Linux) without installing any extra drivers. However, from what I have found on the web so far, there seem to be several small gotchas around which parameters are used to create the filesystem, which can reduce compatibility (and most of the pages I found are about optical media, not USB flash drives). I would like to know:

    - Which utility should I use to create the filesystem? (So far I have found mkudffs and genisoimage, and mkudffs seems the best option.)
    - Which parameters should I use with the chosen utility for maximum compatibility?
    - How compatible is UDF, in practice, with the most common versions of these three operating systems?
    - Is UDF actually the best idea? Is there another filesystem with better compatibility, without problematic restrictions like the FAT32 4 GB file-size limit, and without requiring special drivers on every single computer that touches it?
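
    As a reference point, a minimal mkudffs invocation for a flash drive, assuming the whole device (no partition table) is used; the device name and label are placeholders. The block size is one of the gotchas mentioned above: it must match the drive's logical sector size, which is 512 on USB sticks versus the 2048 that optical-media guides assume:

        # whole-device UDF, 512-byte blocks to match the stick's sector size
        mkudffs --media-type=hd --blocksize=512 --vid="USBSTICK" /dev/sdX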

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question.

    Context: we have around 1000 users; 900 of them work, most of the time, out of a centralized office. Most of the computers are laptops that come on and off the network for hours at a time. Users often take their computers home and do a lot of work from home. In addition, a handful of users work elsewhere in the country and are offline (no internet connection whatsoever) for more than half of the time they use their machines. All machines run Windows 7 or XP.

    Problem: people are always losing data. One day someone accidentally deletes a bunch of files; the next day someone else installs a bad driver or messes with something in system32 and needs a personal-data backup and a reinstall of Windows. Because so many of our business operations happen without an internet connection, and computers come on- and offline so frequently, it is unfeasible to make users keep all of their data on network storage. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they had accidentally deleted while offline, not having taken a backup in months. We tried teaching them backup best practices, and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amenable to any sort of "you need to do something regularly because we say so" solution.

    Question: other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions and the like. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive; there are people who are self-righteous about storing things in C:\, in the recycle bin, or in C:\Windows (yes, I know).

    I'm fine integrating multiple products or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome as a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a two-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server, and hundreds per client if necessary.
    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch or from different technologies to make something stable that works. Cheers!
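
    Not an off-the-shelf answer, but as one sketch of the hacked-together route: a scheduled task running robocopy toward the server whenever it is reachable would cover the resumable, WAN-friendly upload leg (the share name and paths below are hypothetical):

        rem sync the user profile up to the server; safe to re-run on any schedule
        rem /Z copies in restartable mode (survives dropped links), /R:1 /W:5 keeps
        rem an unreachable server from hanging the task for hours
        rem CAUTION: /MIR propagates deletions too, so pair this with VSS snapshots
        rem on the server side for the point-in-time "oh no" recoveries
        robocopy C:\Users \\backupsrv\backups\%COMPUTERNAME% /MIR /Z /R:1 /W:5 /LOG+:C:\backuplog.txt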

    Read the article

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and have created one virtual machine inside it, using network bridging, at 192.168.0.194. This virtual machine has an nginx instance running, which I can reach from the server or from any computer on the internal network just by typing http://192.168.0.194 into the browser. However, when I configure HAProxy on the same server that hosts KVM, the HAProxy status page always shows the virtual machine as "DOWN". If I go to http://localhost on the server, it should behave the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but even this little example won't work. What am I doing wrong? This is my config file on the server:

        # /etc/haproxy/haproxy.cfg
        global
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen ServerStatus *:8081
            mode http
            stats enable
            stats auth haproxy:haproxy

        listen Server *:80
            mode http
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server mv1 192.168.0.194:80 cookie A check

    Thanks.
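
    One thing the config above suggests checking: option httpchk HEAD /check.txt marks the backend DOWN unless the VM actually serves /check.txt with a 2xx/3xx status. A quick sketch to verify from the KVM host (the nginx docroot path is an assumption and may differ on your VM):

        # this is essentially the probe HAProxy sends; a 404 here means "DOWN"
        curl -I http://192.168.0.194/check.txt
        # if it 404s, create the file in the VM's nginx docroot
        ssh root@192.168.0.194 'echo ok > /usr/share/nginx/html/check.txt'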

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working; there is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple of days later I noticed that OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google I was able to uninstall PXE, make a few changes, and reinstall it with WDS to get it working again. Again, I thought everything was fine and continued on.

    The last couple of days I have noticed that any machine newly deployed or newly installing the client shows in the SCCM console with Client = "No". These client machines show as connected to a site, but Software Center shows "IT Organization" instead of our site name as on the previous clients. The existing clients all seem to be functioning normally: they still receive application distributions, configuration baselines, etc. Uninstalling, reinstalling, and repairing the client does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs, except for ClientMessaging.log, which continuously repeats this line:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks.
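
    The generic "IT Organization" branding usually means the client never receives site policy after assignment. As a hedged next step, a clean reinstall pointed explicitly at the MP and site code is worth trying; the ccmsetup switches below are standard client-install properties, but the server name and site code are placeholders:

        REM wipe and reinstall the client, forcing site/MP assignment and fresh keys
        ccmsetup.exe /uninstall
        ccmsetup.exe /mp:sccm01.domain.org SMSSITECODE=ABC RESETKEYINFORMATION=TRUE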

    Read the article

  • Tell Postfix to merge three Authentication-Results: headers into one?

    - by Peter
    I am running a Postfix MTA on Debian Wheezy, using postfix-policyd-spf-python, opendkim, and opendmarc. When receiving e-mails from Google (Google Apps with our own domain), for example, the header looks like this:

        [...]
        Authentication-Results: mail.xx.de; dkim=pass reason="1024-bit key; insecure key" header.d=yyy.com [email protected] header.b=OswLe0N+; dkim-adsp=pass; dkim-atps=neutral
        [...]
        Authentication-Results: mail.xx.de; spf=pass (sender SPF authorized) smtp.mailfrom=yyy.com (client-ip=2a00:1450:400c:c00::242; helo=mail-wg0-x242.google.com; [email protected]; [email protected])
        [...]
        Authentication-Results: mail.xx.de; dmarc=pass header.from=yyy.com
        [...]

    In other words, each of these programs creates its own Authentication-Results: line. Is it possible to tell Postfix to merge these into one single Authentication-Results: line? When I send an e-mail to Google, it says:

        [...]
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates xxx.xxx.xxx.xxx as permitted sender) [email protected]; dkim=pass [email protected]; dmarc=pass (p=NONE dis=NONE) header.from=xxx.com
        [...]

    And this is exactly what I want: just one Authentication-Results header. How can I do this? Thanks. Regards, Peter

    Read the article

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem: we have an FC6 server instance running on a virtual machine, and the system time seems to have drifted slowly until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (for example) 00:15:15 GMT when the actual time is 00:00:00 GMT, an offset of 915 seconds. selinux has been changed to 'setenforce 0' for testing, and I am running as root.

    I stop the ntpd service and change the time in System | Administration | Date & Time. The time still shows the same with 'date' in bash, and there are no error logs. I change the date with 'date --set' in bash; the response confirms the changed date, but running 'date' still shows the incorrect date, again with no error logs. I start the ntpd service, and /var/log/messages reports success with 'time reset -915.720139s', yet the date remains unchanged. ntpq -p shows all three time servers with offsets of around -915 seconds. I stop the ntpd service and try 'ntpd -gqx', with the same result as above: success, but a large negative time reset.

    I've tried varying combinations of the above, and a few more settings in System | Administration | Date & Time, with no change. I just need to reset the system time to GMT, with no offset, and I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...
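
    The usual one-shot correction looks like the sketch below. But since even 'date --set' is being silently reverted here, it is also worth checking whether the virtualization host's own guest time synchronization keeps rewriting the clock; that would explain why ntpd's reset never sticks either, and it has to be disabled on the host side:

        service ntpd stop
        ntpdate -u 0.pool.ntp.org    # -u avoids clashing with a still-bound local port 123
        hwclock --systohc --utc      # keep the hardware clock consistent with the fix
        service ntpd start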

    Read the article

  • DNS, subdomain, and IPv6 -- possible to add subdomain.example.com NS record to an IPv6 host?

    - by mpbloch
    example.com is listed with a registrar, specifically answerable.com. I want to host a subdomain in-house, specifically home.example.com. I am using an IPv6 gateway, specifically gogo6, to have a public IPv6 address. The IP address looks like 2001:xxxx:xx47, and http://[2001:xxxx:xx47] reaches my test site (an instance of IIS7). I can add a AAAA record for my primary site, home.example.com AAAA 2001:xxxx:xx47, and then http://home.example.com loads correctly.

    Must I add an A or AAAA record to my answerable.com DNS manager for every sub.home.example.com? Or can I delegate DNS queries for *.home.example.com to the machine at [2001:xxxx:xx47]? I have tried adding a AAAA record for tunnel.example.com pointing at [2001:xxxx:xx47], and then an NS entry for home.example.com pointing at tunnel.example.com, but browsing then results in a "DNS lookup error" from my browser. Is this a configurable scenario? Can DNS for a subdomain only be delegated to IPv4 addresses?
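
    Delegation itself is address-family-agnostic: an NS record names a host, and that host may carry only a AAAA record. In BIND-style zone syntax, using the names from the question, the records in the example.com zone would look roughly like this:

        ; delegate home.example.com to the in-house machine
        home.example.com.    IN NS    tunnel.example.com.
        tunnel.example.com.  IN AAAA  2001:xxxx:xx47

    Two caveats, though: the machine at that address has to run actual DNS server software answering authoritatively for the home.example.com zone (IIS serving HTTP is not enough), and every resolver walking the delegation needs IPv6 connectivity of its own, which is the more likely source of the "DNS lookup error".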

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands (knife node list, knife client list, knife user list, etc.), and I'm able to update roles, data bags, environments, and so on, but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of knife cookbook upload -VV; it doesn't matter which cookbook I upload, or whether I upload them all, I get the same response:

        DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
        DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
        INFO: HTTP Request Returned 204 No Content:
        /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
            from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
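
    Note that the gem paths in the traceback show the workstation running chef-12.0.0.alpha.1 against an open source Chef 11 server; an alpha client's JSON handling choking on a 204 No Content response smells like exactly that version skew. A hedged first step is pinning the workstation back to a stable 11.x client:

        gem uninstall chef
        gem install chef -v '~> 11.12'
        knife --version   # should no longer report 12.0.0.alpha.1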

    Read the article

  • Hardware and network infrastructure for running a gaming server on VirtualGL

    - by archer
    Found a nice project: VirtualGL (http://www.virtualgl.org/). I tried running 3D games (EVE Online, Prototype) on the server and displaying the output on a thin client over a 100 Mbps network. Server: Gentoo Linux on an AMD Phenom II X6 3.4 GHz, 8 GB RAM, 2x NVIDIA 9800 GTX, in a single session with a display resolution of 1024x768 on the client. Performance is very promising. I'm going to increase the network speed to 1 Gbps (using either Ethernet or fiber) and run 5-6 clients simultaneously. My questions are:

    a) What would be better for the network: 1 Gbps Ethernet or fiber (clients are distributed within at most 20 m of the server)? Is a managed switch a must for better network performance?

    b) Should I increase the number of video cards in the server (I'm going to use a Gigabyte GA-890FXA-UD7, which has 6 PCI Express slots [2 x4, 2 x8 and 2 x16])? Will it impact performance significantly? If I do need more video cards, what would be better: 2 banks of cards with 3 per bank using SLI, or 3 banks with 2 per bank? Would Linux recognize and properly use all banks of video cards?

    c) Any suggestions on good thin clients supporting 1920x1080 HDMI video and a 1 Gbps network?

    I understand that my questions can't be answered definitively (unless someone has already managed to use this kind of setup ;)), but any suggestions would be very helpful.
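
    Before sizing hardware, it may be worth measuring how much of the bottleneck is the VGL image transport rather than the GPUs. A sketch of a tuned client session (vglconnect, vglrun, and the glxspheres64 demo ship with VirtualGL; the codec and quality values are just starting points to experiment with):

        vglconnect -s user@server           # -s tunnels both X11 and the image stream over SSH
        vglrun -c jpeg -q 80 glxspheres64   # JPEG transport at quality 80; watch the reported fps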

    Read the article

  • Unable to ping domain.local, but can ping server.domain.local

    - by Force Flow
    I have a single Windows 2008 server running Active Directory, Group Policy, and DNS. DHCP runs from the firewall (there are multiple branch locations, and each location has its own firewall supplying DHCP; for this problem, though, the server and workstation are at the same location).

    On an XP workstation, if I try to visit \\domain.local or ping domain.local, the workstation can't find it; a ping returns "Ping request could not find host domain.local". If I visit \\server or \\server.domain.local, or ping server or server.domain.local, I can connect normally. If I ping or visit domain.local on the server itself, I can also connect normally. A records are in place in the DNS service for server, domain.local, and server.domain.local; a reverse lookup zone is enabled and PTR records are in place.

    If I wait 20-30 minutes, I am eventually able to ping and visit domain.local, but a ping then takes 30 seconds to return an IP address. I am also unable to join a new workstation to the domain during this wait period; if I try, the error message returned is "network path not found". Is there something I'm missing?
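
    Given that DHCP comes from the firewall, a hedged first check is whether the workstation's resolver actually points at the DC at all; the 20-30 minute delay with eventual slow answers looks more like NetBIOS broadcast fallback than working DNS. A quick diagnostic sketch (the DC's IP is a placeholder):

        rem which resolver did DHCP hand out?
        ipconfig /all | findstr /i "DNS"
        rem query via the configured resolver, then directly against the DC
        nslookup domain.local
        nslookup domain.local 192.168.1.5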

    Read the article

  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my MacBook Pro. I can get the Apache server to start, but not the MySQL server, on both the default Apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!". Here's what's in my MySQL error log:

        100513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended
        100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql
        100513  2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead.
        100513  2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2
        100513  2:00:16 [Warning] One can only use the --user switch if running as root
        100513  2:00:16 [Note] Plugin 'FEDERATED' is disabled.
        100513  2:00:16 [Note] Plugin 'ndbcluster' is disabled.
        InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 16777216 bytes!
        100513  2:00:16 [ERROR] Plugin 'InnoDB' init function returned error.
        100513  2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100513  2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100513  2:00:16 [ERROR] Aborting
        100513  2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete
        100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended

    A couple of things:

    1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.). How can I tell which one is actually being used?

    2) I deleted ib_logfile0 and ib_logfile1 as recommended by another post on Server Fault, and then ended up with more errors:

        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100519 16:01:31 InnoDB: Database was not shut down normally! Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
        100519 16:01:31 InnoDB: Started; log sequence number 0 44556
        100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb'
        100519 16:01:31 [ERROR] Aborting

    And then I got this the next time I tried to run it:

        InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35
        InnoDB: Check that you do not already have another mysqld process
        InnoDB: using the same InnoDB data or log files.

    Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
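
    Reading the logs above, the line that actually aborts startup is unknown option '--skip-bdb' (the BDB engine was removed in MySQL 5.1, so the option no longer exists); the InnoDB messages in the second run are just routine crash recovery. A hedged sketch addressing both questions, assuming MAMP's mysqld lives under /Applications/MAMP/Library/bin (the path is inferred from the log, not verified):

        # (1) ask the bundled mysqld which .cnf files it reads, and in what order
        /Applications/MAMP/Library/bin/mysqld --verbose --help | grep -A1 'Default options'
        # (2) in that file, delete or comment out the skip-bdb line, make sure no
        # stray mysqld still holds ibdata1 (the 'Unable to lock' error), then restart
        ps aux | grep [m]ysqld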

    Read the article

  • Citrix Metaframe/RD - screen refresh weirdness

    - by southof40
    I access a client's W2003 machine (Xen virtualization) using RD over Citrix Metaframe. Everything used to be fine; some weeks ago things turned bad! All is well initially, but after, say, 5 minutes the screen stops refreshing. Rather weirdly, you can still proceed in a fashion, because you can force a refresh by making the RD window go through a restore/maximise cycle (only possible with the ALT-BREAK shortcut, as everything else is locked up). This lets you proceed by typing something and pressing ALT-BREAK to see the results; using menus is just not possible at all. There are some indications that clearing the Java cache between sessions helps. Also, the lockup happens more quickly when a lot changes on screen: for instance, listing a big directory will often trigger it, as will opening a dense Excel workbook and scrolling it. Any Metaframe veterans out there who recognise these symptoms? I'd be very grateful, as it's driving me nuts.

    Read the article

  • Apache2 Segmentation fault with wsgi_module

    - by a coder
    Apache 2.2.3 is running as an existing web server under RHEL 5, and I'm attempting to set up Trac using wsgi_module. RHEL 5 ships with Python 2.4, so in order to use the current version of Trac (1.0) I installed it with easy_install-2.6. Trac works with the default mod_python, but users are strongly encouraged not to use that module, as it is officially dead. Using RHEL's package manager, I downloaded and installed python26-mod_wsgi.so. I backed up httpd.conf, then made the following additions:

        LoadModule wsgi_module modules/python26-mod_wsgi.so
        #...#
        WSGIScriptAlias /trac /www/virtualhosts/trac/deploy/cgi-bin/trac.wsgi
        <Directory /www/virtualhosts/trac/deploy/cgi-bin>
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>

    Next I moved trac.conf to trac.conf.bak (it contains the mod_python calls). I tested the configuration using apachectl configtest: syntax OK. So I reloaded the server config using service httpd reload. At this point, all virtualhosted sites stopped responding. I restored my backup copy of httpd.conf, reloaded the server config, and the virtualhosted sites are being served again. A quick look at the httpd error_log shows:

        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28282): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28280): Attach interpreter ''.
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1817): proxy: grabbed scoreboard slot 0 in child 28283 for worker proxy:reverse
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1836): proxy: worker proxy:reverse already initialized
        [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1930): proxy: initialized single connection worker 0 in child 28283 for (*)
        [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28283): Initializing Python.
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28249 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28250 exit signal Segmentation fault (11)
        [Mon Oct 08 10:20:04 2012] [notice] child pid 28251 exit signal Segmentation fault (11)

    There are many similar lines; this is just a snippet of the log file. Suggestions on what could be causing the segmentation faults?
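
    A classic cause of exactly these segfaults is mod_python (linked against the system Python 2.4) still loading alongside the Python 2.6 mod_wsgi, putting two incompatible Python runtimes in one httpd process. A hedged sketch: make sure the old module is gone, and run Trac in mod_wsgi daemon mode so the interpreter lives in its own process (user/group and sizing values below are assumptions):

        #LoadModule python_module modules/mod_python.so   # must not load together with wsgi_module

        WSGIDaemonProcess trac user=apache group=apache processes=2 threads=25
        WSGIScriptAlias /trac /www/virtualhosts/trac/deploy/cgi-bin/trac.wsgi
        <Directory /www/virtualhosts/trac/deploy/cgi-bin>
            WSGIProcessGroup trac
            WSGIApplicationGroup %{GLOBAL}
            Order deny,allow
            Allow from all
        </Directory>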

    Read the article

  • ESXi - change to thin - virtual disk filesize is the same

    - by sven
    Running ESXi 5.5 here, with a datastore on a single SSD. I thought about changing from thick to thin disks and found a tool on the ESXi host to do that. However, the file size of the newly created virtual disk does not change. I run:

        vmkfstools -i loader.vmdk -d 'thin' thinloader.vmdk
        Destination disk format: VMFS thin-provisioned
        Cloning disk 'loader.vmdk'...
        Clone: 100% done.

    After that I compared the virtual disk sizes:

        ls -la *.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:25 loader-flat.vmdk
        -rw------- 1 root root         467 May 21 17:04 loader.vmdk
        -rw------- 1 root root 32212254720 Jun 10 08:27 thinloader-flat.vmdk
        -rw------- 1 root root         520 Jun 10 08:33 thinloader.vmdk

    Stats on the original file:

        stat loader.vmdk
          File: loader.vmdk  Size: 467  Blocks: 0  IO Block: 131072  regular file
          Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 419443780  Links: 1
          Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
          Access: 2014-01-25 10:17:34.000000000
          Modify: 2014-05-21 17:04:06.000000000
          Change: 2014-05-21 17:04:06.000000000

    and on the thin file:

        stat thinloader.vmdk
          File: thinloader.vmdk  Size: 520  Blocks: 0  IO Block: 131072  regular file
          Device: 8bf64d175e27544ch/10085333178302026828d  Inode: 432026692  Links: 1
          Access: (0600/-rw-------)  Uid: (0/root)  Gid: (0/root)
          Access: 2014-06-10 08:27:45.000000000
          Modify: 2014-06-10 08:33:30.000000000
          Change: 2014-06-10 08:33:30.000000000

    Does anyone have an idea why the thin disk is not taking any less space (I've tried this with multiple VMs already, all the same)? Also, I have noticed that the newly created disk automatically appends "-flat" to its filename. Thanks, Sven.

    Update: diff of the vmdk descriptors:

        --- loader.vmdk
        +++ thinloader.vmdk
        @@ -7,15 +7,17 @@
         createType="vmfs"
        -RW 62914560 VMFS "loader-flat.vmdk"
        +RW 62914560 VMFS "thinloader-flat.vmdk"
         ddb.adapterType = "lsilogic"
        +ddb.deletable = "true"
         ddb.geometry.cylinders = "3916"
         ddb.geometry.heads = "255"
         ddb.geometry.sectors = "63"
         ddb.longContentID = "6d95855805dfa0079327dfee29b48dca"
        -ddb.uuid = "60 00 C2 98 d5 7d 17 bf-ac 54 70 b1 2d 39 43 d5"
        +ddb.thinProvisioned = "1"
        +ddb.uuid = "60 00 C2 93 c4 13 6c cf-bb 7b 34 c9 2c b4 dc 1e"
         ddb.virtualHWVersion = "8"
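
    Worth noting: on VMFS, ls -la reports the provisioned size for thin as well as thick disks, and the ddb.thinProvisioned = "1" line in the diff indicates the clone really is thin. To see the blocks actually allocated on the datastore, compare with du (a sketch, assuming the ESXi shell's busybox du behaves as usual):

        du -h thinloader-flat.vmdk   # actual allocation; should be well under 30 GB
        du -h loader-flat.vmdk       # the thick original stays fully allocated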

    Read the article

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian Squeeze) with several NICs. The ones of interest are:

    - eth3: my main link to the network (DHCP on 10.20.30.0/24)
    - eth0: the first connection to my test network (static: 192.168.1.2)
    - eth4: the second connection to my test network (static: 192.168.1.1)

    My routing table looks like this:

        $ sudo route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.20.30.0      *               255.255.255.0   U     0      0        0 eth3
        default         10.20.30.254    0.0.0.0         UG    0      0        0 eth3

    I have the two test-net ports connected to each other with a crossover cable and an instance of Wireshark running on each port. Every once in a while, I'll see a packet like the following show up. Who could be sending these, and how do I convince them to stop? I do have Samba running on the machine (for a cifs mount), but I don't see why it would send packets out unrouted ports. I had a Windows VM running in VMware Client and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces, so I can run some tests with Scapy over them.
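
    If the frames turn out to come from nmbd (Samba's NetBIOS daemon, which broadcasts its Browse MailSlot announcements on every up interface by default), binding Samba to the main NIC should silence the test ports. A sketch, assuming Samba is indeed the sender:

        # /etc/samba/smb.conf
        [global]
            interfaces = lo eth3
            bind interfaces only = yes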

    Read the article

  • Talk on multiple IRC channels at once?

    - by TwoPixelGrid
    I seem to remember, back in '91 or so, that the console-based ircII implementation on the Solaris box that first got me on the net would let me /join multiple channels on a given network such that, as new channels were joined, they would all start scrolling in the single console view. Let's call it the 'interleaved conversation' chat paradigm. Am I remembering this correctly? More importantly, is there a modern way of doing this in any of the GUI-based clients? I'm surprised this isn't a common desire/feature, because I think it would greatly improve the experience, especially on channels with high SNR. For example, if I'm working on a project, I may connect to Freenode and join #Qt, #OpenGL, and #C++. As it is now, with mIRC or XChat, I have to flip between pages manually just to see what's being said and to reply. What I envision would go more like this (using only 2 channels for simplicity):

        /join #QT #OpenGL
        < [QT] QtChannelUser: Hello TwoPixelGrid.
        < [OpenGL] OpenGLChannelUser: Hi there TwoPixelGrid.
        @QT: Hi QtChannelUser
        @OpenGL: Hello again OpenGLChannelUser

    And this message is going out to all my channels. Do I have to write a new client, or is this already out there?

    Read the article

  • Using git through cygwin on windows 8

    - by 9point6
    I've got a Windows 8 dev preview machine (not sure if it's relevant, but I never had this hassle on Win7) and I'm trying to clone a git repo from GitHub. The problem is that my ~/.ssh/id_rsa has 440 permissions and it needs to be 400. I've tried chmodding it, but any change to the user permissions is reflected in the group permissions (i.e. chmod 600 results in 660, etc.). This appears to be constant for any file in the whole filesystem. I've tried messing with the ACLs, but to no avail (full control for my user plus deny for everyone resulted in 000). Here are a few outputs to help:

        $ git clone [removed]
        Cloning into [removed]...
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: UNPROTECTED PRIVATE KEY FILE!                  @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0660 for '/home/john/.ssh/id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: /home/john/.ssh/id_rsa
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ chmod -v 400 ~/.ssh/id_rsa
        mode of `/home/john/.ssh/id_rsa' changed from 0440 (r--r-----) to 0400 (r--------)

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ set | grep CYGWIN
        CYGWIN='sbmntsec ntsec server ntea'

    I realize I could use msysgit or something, but I'd prefer to be able to do everything from a single terminal. Edit: msysgit doesn't work either, for the same reasons.
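
    The usual Cygwin fix here is mounting with the noacl option, which makes Cygwin fake the POSIX permission bits itself instead of mapping them onto NTFS ACLs (that mapping is what drags the group bits along with the user bits). A sketch for /etc/fstab; the home-directory line and its Windows path are hypothetical and depend on your install:

        # /etc/fstab (Cygwin): remount without ACL mapping, then re-run chmod 400
        none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
        C:/cygwin/home /home ntfs binary,noacl 0 0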

    Read the article

  • How to setup an IPSec / GRE tunnel on Windows Server 2008

    - by qbeuek
    I have a Windows Server 2008 box with a single network interface configured with a public IP address. My business partner has a private network. From my server, I need to access all the devices on his private network, and those devices must be able to access my server. My business partner has a standard solution for these requirements: they will set up an IPSec + GRE tunnel to my server, and they told me I will need an additional public IP address for this to work. If it really is necessary, there is no problem; I can get an additional public IP address, although it will be assigned to the same physical network interface. I assume that my server will then have both public IP addresses plus the private IP address from the tunnel (the same one visible to devices inside the private network).

    What alternatives do I have? Is it possible to configure this tunnel on my Windows Server 2008, either using only Windows tools or with additional free/commercial VPN software? If it cannot be done directly on Windows, can I set up an additional virtual machine running Linux to handle the IPSec + GRE tasks, and how? If it cannot be done on a virtual Linux box, will I have to buy and set up a Cisco router to handle the IPSec + GRE tasks? Thanks for your opinions; I'm watching this question to clarify any issues or questions.
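
    For what it's worth, Windows Server 2008 has no native GRE tunnel support, so the Linux VM route is the workable one of the options listed. The GRE leg, in iproute2 terms, looks roughly like the sketch below (all addresses are placeholders; the IPsec leg that encrypts the GRE traffic would be handled separately, e.g. by strongSwan or racoon):

        ip tunnel add gre1 mode gre local 198.51.100.2 remote 203.0.113.1 ttl 255
        ip addr add 10.0.0.2/30 dev gre1          # tunnel-internal addressing
        ip link set gre1 up
        ip route add 192.168.50.0/24 dev gre1     # partner's private network via the tunnel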

    Read the article

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of two ways to search for any file on your machine using Finder (some say it's Spotlight) and no Terminal. To prevent answers and comments about Terminal: I consider it either for scripting something or as a last resort; it's not practical for many usages. For instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag and drop one or many results; that's definitely not practical from Terminal. There are many use cases, but the focus here is the graphical user interface.

    The two ways basically are:

    1. Press Cmd + Opt + Spacebar and type in your search. Press the + button, and select "System files" "are included". This is so far my preferred way, but I'm not sure it goes through every file.
    2. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select that folder rather than "This Mac". This will bring up files not shown under "This Mac" if you select a folder outside the default scope.

    The thing is, neither of those is really convenient, nor has the nice presentation of regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, the default behavior of Spotlight in Tiger was actually to find files anywhere. So, is there any way to make the process significantly simpler? Maybe some tweak, configuration, or a really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with the worker MPM and PHP via FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for the worker MPM:

        KeepAlive On
        KeepAliveTimeout 2

        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi

            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1

            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>

            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!
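
    Reading the numbers above together: MaxClients 20 caps the whole server at 20 worker threads, they all funnel into only 8 PHP children, and the 20-second idle timeout matches the aborts in the log. A hedged retune, to be sized against whatever RAM the ~70 MB PHP processes allow (all values here are starting points, not measured):

        <IfModule mpm_worker_module>
            ServerLimit         16
            ThreadsPerChild     25
            MaxClients         400    # ServerLimit x ThreadsPerChild
        </IfModule>

        # give slow PHP requests more headroom before FastCGI kills the connection
        FastCgiConfig -idle-timeout 110 -maxClassProcesses 1
        # and in the wrapper: PHP_FCGI_CHILDREN=16 (roughly 16 x 70 MB resident)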

    Read the article

  • How to set up Zabbix to monitor SQL Server Failover Active-Passive Cluster?

    - by Sebastian Zaklada
    It should be simple, so most likely my approach is totally off and someone will hopefully prod me in the right direction. We have a Zabbix 2.0.3 server instance set up monitoring a bunch of different servers, but now we need it to monitor, and raise alerts for, a SQL Server 2008 R2 failover active-passive cluster. Essentially, this is a two-server cluster in which only one node can be "active" at a given time, serving all SQL Server requests, while the other server just "sleeps"; from the point of view of anyone logged on to that server, all SQL Server services are in the stopped state.

    We tried setting up Zabbix agents on both servers, using the SQL Server 2005 templates (we could not find any 2008-specific ones, and the 2005 ones have always seemed to work fine for monitoring 2008 R2 instances) and configuring both servers in Zabbix, but we end up with constant alerts for whichever server is currently the passive node. We have found various methods of monitoring the failover itself, but no guidance on how to tell Zabbix that, in this particular case, only one of the servers in the group is expected to be online, while the other can simply be disregarded and should not raise alerts. I hope I made myself clear. Thanks for any guidance; I am out of ideas.
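
    One hedged way to express "alert only when neither node runs SQL Server" is a single trigger spanning both hosts, built on the Windows agent's service_state item (0 means the service is running). The host names and service name below are placeholders, and the expression uses the Zabbix 2.0-era syntax:

        {sql-a:service_state[MSSQLSERVER].last(0)}<>0 & {sql-b:service_state[MSSQLSERVER].last(0)}<>0

    With the per-host service triggers disabled, the passive node then never alarms on its own, because this expression fires only when both nodes report the service as not running.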

    Read the article

  • SQL Server 2005 database lost. How to recover all records? MDF/LDF size is what it should be

    - by Shantanu Gupta
    A few months back, I installed SQL Server 2005 on one of my client's machines. I gave him a backup option and told him to take backups regularly, but he never took any. Today he called me saying, "I'm not able to see any record of mine." I visited my client's system and saw that none of his records were present: there was not even a single row in any of the tables. I then checked whether he had any backup file, which was absent. I asked him what the possible cause could be; he said it might be a virus. After this I checked the sizes of the MDF and LDF files and found them to be what they should be: when I set up his server, the MDF/LDF files held about 2 MB of database, and now they are 83 MB and 193 MB respectively. This suggests the data is still present but is not being displayed. What could be the possible cause, and how can I restore all the data back to the tables?

    Read the article

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats, which in turn connect to various databases. We balance load to the Tomcats with mod_proxy_balancer. Currently we receive 100 requests a second; the load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% of what I estimate they can handle. In a few weeks there is an event happening, and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. That way, users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense; I'm looking for suggestions on how I could achieve this. Thanks.
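
    Apache cannot react to average response time out of the box, but a blunt approximation gets close: cap how long a request may wait on Tomcat, temporarily eject members that start failing, and serve a static error page for the resulting 503s. A hedged sketch (member URLs and paths are placeholders; the failonstatus parameter needs a reasonably recent 2.2.x):

        ProxyPass /app balancer://tomcats/ timeout=10 failonstatus=503
        <Proxy balancer://tomcats>
            BalancerMember ajp://tomcat1:8009 timeout=10 ping=2 retry=30
            BalancerMember ajp://tomcat2:8009 timeout=10 ping=2 retry=30
        </Proxy>
        ProxyErrorOverride On
        ErrorDocument 503 /static/overloaded.html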

    Read the article

  • Bare-metal virtualisation for the desktop

    - by Andrew Taylor
    Hi. Does anyone have any knowledge of bare-metal virtualisation products? I'm interested in building a new desktop machine for home. I've been looking at the Intel quad-core processors, and I'd like to put 8 GB of RAM in there, which got me thinking about making the most of the available resources. I thought that if I could get a good 64-bit machine and put some bare-metal virtualisation on it, I could have a primary system and also bring up extra virtualised systems as and when I needed them. I know most bare-metal systems are designed for the server market, but is there anything out there that works well for a desktop? What are the caveats? I presume I won't be able to make the most of any video card I buy; what about just getting a decent screen resolution, will that be a problem? I run a single 24" screen. What about DVD/CD writing, is that possible? I'd like to re-rip my CD collection, and I was hoping the quad-core 64-bit goodness would help me out with the encoding. I currently use a Mac and couldn't go back to Windows, so that leaves Linux; I was thinking of Ubuntu as the primary OS. Does this make a difference? Thanks, Andrew

    Read the article
