Search Results

Search found 21319 results on 853 pages for 'state management'.


  • Can an image-based backup potentially corrupt data?

    - by ServerAdminGuy45
    I'm considering doing image-based backups (Acronis) on production Windows systems during non-peak hours, and I'm wondering if they can potentially lead to application data corruption. Let's say I have a database that is getting hit pretty hard. Could the beginning blocks of the database be committed to the image, then data be inserted into the db (which changes the beginning blocks of the DB on the server, but not in the image), and then the later blocks of data be committed to the image, leading to an inconsistent state?

    Here's an example of what I'm trying to illustrate. Imagine a simple data structure where a number at the front of a file records the number of "a"s in the file, delimited from the data by a "-". For example:

        4-ajjjjjjjajuuuuuuuaoffffa

    If an "a" is changed, the data structure resets the number at the beginning of the file, such as:

        3-ajjjjjjjajuuuuuuuboffffa

    I assume Acronis writes block by block, being a straight-up image, so here is what I'm envisioning happening with my database:

        t0: 4-ajjjjjjjajuuuuuuuaoffffa   ^ pointer is here
        t1: 4-ajjjjjjjajuuuuuuuaoffffa   ^ pointer is here (all data before it is committed to the image)
        t2: 4-ajjjjjjjajuuuuuuuboffffa   ^ pointer is here (all data before it is committed to the image)
            (notice how one of the "a"s changed to a "b": there are only 3 "a"s now)
        t3: 4-ajjjjjjjajuuuuuuuboffffa   ^ pointer is here (all data before it is committed to the image)

    The final image now reads "4-ajjjjjjjajuuuuuuuboffffa", while the true data is "3-ajjjjjjjajuuuuuuuboffffa", leaving a corrupt "database". Basically, changes further along the chain of blocks can make it into the image while the header and synchronization data ahead of them have already been committed, so the out-of-date header no longer accurately reflects the structure of the blocks that follow.
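
    To make that race concrete, here's a toy Python model of it; this is nothing Acronis-specific, just a sequential block copy with a writer that mutates the source (and fixes its header) mid-copy:

        def copy_blocks(blocks, mutate_at, mutate):
            """Copy blocks sequentially; a writer mutates the source mid-copy."""
            image = []
            for i in range(len(blocks)):
                if i == mutate_at:
                    mutate(blocks)  # touches blocks behind and ahead of the copy pointer
                image.append(blocks[i])
            return image

        source = list("4-ajjjjjjjajuuuuuuuaoffffa")

        def writer(blocks):
            blocks[19] = "b"                                 # change one "a" to a "b"...
            blocks[0] = str("".join(blocks[2:]).count("a"))  # ...and fix the header: 3 now

        image = copy_blocks(source, mutate_at=10, mutate=writer)
        print("live  :", "".join(source))  # 3-ajjjjjjjajuuuuuuuboffffa (consistent)
        print("image :", "".join(image))   # 4-ajjjjjjjajuuuuuuuboffffa (stale header)

    The image ends up with the old header and the new data, exactly the mix described above, unless the backup tool snapshots the volume at a single point in time or the application quiesces its writes first.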


  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

    Context: Our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

    Question: Are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...


  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never got this working on the first try, and now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N  The database is currently in use.  SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I  The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N  The database is currently in use.  SQLSTATE=57019

    Anyone got any clues? I'm running the cmd windows as the local administrator (Windows 2008), who is also the admin for DB2. The connection-pool user cannot connect during the quiesce state.


  • Hyper-V Guests Dying

    - by Jon Rauschenberger
    I just hit my THIRD instance of a Hyper-V guest machine dying with the exact same behavior. In all three instances we are hosting WS2008 guests on a WS2008 host. After a config change, we reboot the guest and the guest OS comes up, but in a very crippled state. Specifically, we are able to log into the guest, but we can't launch any apps, and the guest never comes active on the network.

    I opened a support ticket with MS the second time this happened, and they focused in on the DCOM subsystem not coming up; the best explanation they could provide was that permissions on key system files got corrupted. I eventually gave up on the ticket after close to 10 hours on the phone trying different things that were going nowhere.

    What really concerns me is that we have now seen the exact same thing happen to a guest hosted on a completely different host machine. There is zero hardware overlap between the two. Has anyone seen this before? It's really odd behavior, but it also seems like there's a pattern here that's concerning me.

    Thanks, Jon


  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (our tasks generated from workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails.

    In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups). We have custom event handlers and workflows built with Visual Studio, and we deploy the DLLs to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain two versions of the workflows: version 1 and version 2. During the deploy we use stsadm features to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 WFs as active. Perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear its cache of the version 2 WF DLLs), restore the 12 hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it; no stsadm here. All seems to work after our restore, and it appears the restore was successful: the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12 hive indicate the WF is still trying to use version 2 of the DLL (a System.IO file-not-found error).

    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF DLLs are still being referenced even though we restored all the folders and dbs of SharePoint?

    Thanks, Kevin


  • PHP + MySQL site performance

    - by Diego
    I have to manage a site which wasn't developed by me. It is in PHP using a MySQL database, which is located on the web server. The site sometimes (when visitor numbers increase too much) stops responding, or responds too slowly. I have developed some sites in PHP but never took care of the management, so I really don't know where to start. The server hardware seems to be fine: when the site stops responding, the CPU is at about 55% and there is a lot of free memory.

    I'm not asking someone to solve this issue. I only would really like it if someone could give me a few tips about where I can find logs and how I should read and interpret them. That way I would be able to tell whether it's the network traffic, the database (and which queries), or something else.

    Thanks!

    Update: Forgot to say: it is Windows Server 2003.

    Note: I've recorded about a day with Jet Profiler. I don't really understand all the information it provides, but there is one query which it marks as really slow. That makes sense, because it is a SELECT with a WHERE clause containing three LIKE conditions. Initially I didn't include this in my question because when I run the query from MySQL Query Browser it doesn't take long at all: under 0.01 seconds.


  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there.

    Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (none of these are hard requirements):

    - The content is always files, residing on a single server, accessible from Samba shares.
    - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
    - Permissions support, allowing for user-based control to reflect current permission access in the Samba shares.
    - The userbase will remain fairly static, so manual management of users is fine.
    - The majority of users will be Windows-based.

    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?)

    What else is out there? Are you in a similar situation? What do you use?


  • Can't access SATA card config screen on boot, nor access the disks

    - by Ronald
    We've just upgraded our file server using an ASUS P6T WS Pro board, running FreeBSD-RELEASE 8.2 and using ZFS to manage 12 WD20EARS disks. Since our 3ware card has been giving us trouble, we started using the six on-board SATA connectors and got a SuperMicro USAS2-L8i to provide eight more ports. Mechanically, the card is an awkward fit, but electrically it all seems OK.

    Upon boot, the LSI controller shows up and states that pressing Ctrl-C will bring up the LSI Config Utility. When doing that, the message changes to state that the utility will be started after initialization; however, that never happens. There does seem to be an error message that's displayed too briefly to read, and it seems to be about PCI and "not enough space". (That message is pushed off by a hardware summary, and I've found no way to scroll back at this point.) The disks do not show up in any recognizable way after booting, either.

    I found a hint in another discussion to check the address mapping on either the card or the motherboard BIOS, but I have found no way to do that. So what I tried on a hunch is to disable everything that's on-board, including the network adapters, the FireWire controller and SATA. In fact, after doing that, I can successfully launch the LSI Config Utility. As far as I can tell, all looks well in there, and when booting in that configuration it also displays a list of the disks connected to it, which looks just fine as well.

    The only problem now is that I can't boot that way, because I need the on-board SATA controller and network adapters. As soon as I re-enable any of them, I'm back to square one. The discussion I mentioned about mapping addresses said to try D000, then D7FF, then DFFF, in order. The LSI Config Utility shows the card address as D000 but offers no way of changing it.

    Any tips or insights would be appreciated.


  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get these error messages on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand this message is the kernel informing me that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been processed. But what is the effect on the processes? For example, will the postgres process eventually write its data when the storage system is idle again after a few minutes, so that all data is still consistent? Or will it abort the write, leaving some tables in an inconsistent state?

    I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should just wait as long as it takes. I can live with the application hanging for a few minutes. But if there is a chance of data corruption, then any of these errors is really bad news. Please advise what to do here.
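
    To convince myself that a stalled write should only delay things, I sketched the write-ahead-logging idea in Python. This is a toy model, not PostgreSQL's actual implementation (the file names and record format are invented): the log record is forced to disk before the data file is touched, so recovery can always replay whole records and discard a torn tail.

        import json, os

        LOG, DATA = "wal.log", "table.json"

        def load():
            return json.load(open(DATA)) if os.path.exists(DATA) else {}

        def write(key, value):
            # 1. Append the intent to the log and force it to disk first.
            with open(LOG, "a") as log:
                log.write(json.dumps({"k": key, "v": value}) + "\n")
                log.flush()
                os.fsync(log.fileno())
            # 2. Only then rewrite the data file. If this step stalls for minutes
            #    (or the machine dies), recovery reproduces it from the log.
            table = load()
            table[key] = value
            with open(DATA, "w") as f:
                json.dump(table, f)

        def recover():
            # Replay every complete log record; a half-written tail line is dropped.
            table = {}
            if os.path.exists(LOG):
                for line in open(LOG):
                    try:
                        rec = json.loads(line)
                    except ValueError:
                        break  # torn record from an interrupted append: ignore it
                    table[rec["k"]] = rec["v"]
            with open(DATA, "w") as f:
                json.dump(table, f)

        write("answer", 42)
        recover()
        print(load())  # {'answer': 42} whether or not step 2 ever finished

    If postgres really does guarantee this ordering (fsynced WAL before data pages), then a slow LUN should mean latency rather than corruption, as long as the storage eventually completes the writes and honors flushes. I'd still like confirmation of that.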


  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi,

    I am setting up log shipping for a BizTalk 2009 database. Following the article at http://msdn.microsoft.com/en-us/library/aa560961.aspx, I am doing the following to set up BizTalk log shipping on the destination server:

    1. Enable ad hoc queries:

        sp_configure 'show advanced options', 1
        go
        reconfigure
        go
        sp_configure 'Ad Hoc Distributed Queries', 1
        go
        reconfigure
        go
        sp_configure 'show advanced options', 0
        go
        reconfigure
        go

    2. Execute LogShipping_Destination_Schema and LogShipping_Destination_Logic in master on the destination server.

    3. Run:

        exec bts_ConfigureBizTalkLogShipping
            @nvcDescription = '',
            @nvcMgmtDatabaseName = '',
            @nvcMgmtServerName = '',
            @SourceServerName = null, -- null indicates that this destination server restores all databases
            @fLinkServers = 1         -- 1 automatically links the server to the management database

    When I run this, I receive the following error:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So I asked the domain admin to create SPNs for the servers and the SQL service accounts, for both source and destination, using the name and FQDN, and to enable the computer names and service accounts for delegation. When I run the following:

        select * from sys.dm_exec_connections

    I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    Any help please?


  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi,

    We have a web application which uses an MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone running us $150 per month (100 MB of extra MSSQL space is $40 per month). Our database size is 896.38 MB.

    I am looking at getting a Virtual Private Server and upgrading the database to an MSSQL 2008 Express database. I am aware that the Express version is limited to a 10 GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19 per month, but I cannot find many details on the difference between Express and Web. Any suggestions here?

    What I would also like to know is: if we upgrade to an MSSQL 2008 database, are there any issues with possible data transformations in the future? I.e., is it possible to download the database and mount it with SQL Server 2008 Standard Edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face?

    Thanks, Mike


  • vsftpd login error: 530 Login incorrect

    - by mcktimo
    Using Ubuntu 10.04 on an AWS EC2 instance. I was happy just using SSH, but then a WordPress plugin needs FTP access. I just need FTP access for one site, www.sitebuilt.net, which is in /home/sitebuil. I installed vsftpd and PAM and followed suggestions that got me to the following state.

    /etc/vsftpd.conf:

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        user_sub_token=$USER
        local_root=/home/$USER
        chroot_local_user=YES
        hide_ids=YES
        check_shell=NO
        userlist_file=/etc/vsftpd_users

    /etc/pam.d/vsftpd:

        # Standard behaviour for ftpd(8).
        auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        # Note: vsftpd handles anonymous logins on its own. Do not enable pam_ftp.so.
        # Standard pam includes
        @include common-account
        @include common-session
        @include common-auth
        auth required pam_shells.so
        # Customized login using htpasswd file
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
        session optional pam_keyinit.so force revoke
        auth include system-auth
        account include system-auth
        session include system-auth
        session required pam_loginuid.so

    /etc/vsftpd_users:

        sitebuil
        tim

    /etc/passwd:

        ...
        sitebuil:x:1002:100:sitebuilt systems:/home/sitebuil:/bin/sh
        ftp:x:108:113:ftp daemon,,,:/srv/ftp:/sbin/nologin

    /etc/vsftpd/passwd:

        sitebuil:Kzencryptedpwd

    /var/log/vsftpd.log:

        Wed Feb 29 15:15:48 2012 [pid 20084] CONNECT: Client "98.217.196.12"
        Wed Feb 29 15:16:02 2012 [pid 20083] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
        Wed Feb 29 16:12:33 2012 [pid 20652] CONNECT: Client "98.217.196.12"
        Wed Feb 29 16:12:45 2012 [pid 20651] [sitebuil] FAIL LOGIN: Client "98.217.196.12"


  • Does Hyper-V support SCSI pass-through disks in a Server 2003 R2 VM?

    - by Peter Bernier
    I'm running into some difficulties getting pass-through disks to be accessible to a Server 2003 R2 virtual machine under Hyper-V.

    Host OS: Server 2008 R2 full with the Hyper-V role. Guest OS: Server 2003 R2 (Windows Home Server). The guest's OS disk is a pass-through disk on the IDE controller (not the best solution, but I can live with it). My storage disks will be pass-through disks on the SCSI controller. I'm able to see all of the disks that I'll be using for the VM on the host without issue.

    The problem I'm having is that I can't get the guest OS to see the storage drives as pass-through disks on the SCSI controller. Here's what I'm doing: on the host, the storage drive is set to Offline, just like the OS disk (this is required for pass-through to work); in the VM, the storage drive is on the SCSI controller; and the Hyper-V Integration Tools are installed in the guest.

    That's as far as I'm able to get. I don't see the drive in Computer Management or in Windows Explorer (I've tried with an unformatted disk, as well as after formatting a partition). I am able to see a removable device that lists the disk's model number in the guest, but I can't seem to access the storage. (I get an entry in Device Manager that needs drivers, but nothing on the Integration Tools disc works.)

    Trouble-shooting steps I've tried: if I put the pass-through drive on the IDE controller, I can see it in the guest; if I put the storage drive Online on the host and create a VHD on it on the SCSI controller, I can see it in the guest. I suppose I could create a fixed-size VHD that consumes the entire disk, but I'd rather not have that overhead. I've also extracted the contents of the Integration Tools drivers (x86 and amd64) and tried pointing the disk controller to each of those, with no luck.

    Can anyone offer suggestions as to how I can get this to work properly?


  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB) which will be mirrored between instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is:

    - Get 4 large instances of types compatible with Placement Groups (and ideally Enhanced Networking): 2 for web, 2 for DB.
    - Use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended
    - Set up ephemeral SSD instance storage as swap space. (It is lost after even a reboot. I hear it's hard to add ephemeral storage when booting from EBS, but possible.)
    - For offsite backup, take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when each snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB? A sketch of what I have in mind is below.)
    - If the database gets too big, create a larger EBS volume. We can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid
    - Store static assets for the web servers on S3.

    Is that correct? Or am I missing something?
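
    For the snapshot-consistency point, the pattern I have in mind is quiesce, sync, start the snapshot, resume; EBS snapshots are point-in-time once initiated, so writes can resume without waiting for the snapshot to finish uploading. A sketch, with the OrientDB flush/resume calls left as placeholders since I don't know its API (boto, the region, and the volume ID are likewise assumptions on my part):

        import os
        import boto.ec2

        REGION = "us-east-1"         # placeholder
        VOLUME_ID = "vol-12345678"   # placeholder data volume

        def freeze_database():
            pass  # placeholder: tell OrientDB to flush and pause writers

        def thaw_database():
            pass  # placeholder: resume writers

        freeze_database()
        try:
            os.system("sync")  # push dirty pages down toward the EBS volume
            conn = boto.ec2.connect_to_region(REGION)
            snap = conn.create_snapshot(VOLUME_ID, "nightly consistent backup")
            print("started snapshot", snap.id)
        finally:
            thaw_database()  # the snapshot completes in the background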


  • suggestions for firewall/router project using *BSD or Linux

    - by Adeodatus
    Hi All,

    I have a project in mind and I'd love to hear some ideas on open source solutions using COTS hardware. I have a few 24- and/or 48-port managed layer-2 switches with customers potentially on each port (though it's usually about 20-30). Right now each switch has a bridged network, and we backhaul the traffic to a centralized DHCP server at our core. I need to move them to a NAT solution and, while doing this, I'd like to protect the customers on each port from the customer traffic on the other ports. I also need to be able to port forward from the public side of the firewall/NAT box to specific hardware on the inside of the NAT machine (easy enough, I know).

    My first thought is to build an appliance-like box (the fewer moving parts the better) that can do filtering and NAT, with an RFC 1918 address range being handed out by a DHCP server on the appliance. A caching DNS server on the appliance would be a plus, since we backhaul everything to the core. I'd like to run FreeBSD, but I'm open.

    Now, to limit the broadcast traffic that's visible, I was thinking of making each port on the switch a different VLAN and having the switch trunk to the private NIC on the FreeBSD appliance. I'd probably need to do some magic on the FreeBSD NIC to get this working, but it should work. We have the parts to build these systems.

    So, does this make sense? Are there any other solutions out there that we don't have to spend money on but can build with our parts? Are there any good distros that could do this already (m0n0wall)? I may or may not admin this solution, so a secure web configuration and management tool would be a plus in the other admins' minds.

    Thoughts?


  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
    Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their website:

        Heavy Utilization RIs - Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They're most appropriate for steady-state workloads where you're willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you're charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you're trying to find a break-even utilization, you're economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term.

    I'm assuming that if I'm planning on running a 24/7 web server, then regardless of how many resources I consume (bandwidth, CPU cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one web server in particular will barely budge the CPU, but it needs to be up and running 24/7. I'm not 100% sure what they're defining as "heavy".
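
    As I read it, "utilization" refers to the hours the instance is running, not how hard it works, so the break-even figure is just arithmetic: a Heavy Utilization RI costs its upfront fee plus the discounted hourly rate for every hour of the term, and that total is compared against On-Demand charges for only the hours you actually run. A quick sketch with placeholder prices (not current EC2 rates):

        HOURS_PER_YEAR = 8766

        on_demand_rate = 0.085   # $/hour, placeholder
        ri_upfront     = 195.0   # $ one-time, 1-year Heavy Utilization RI, placeholder
        ri_hourly      = 0.016   # $/hour, billed for EVERY hour of the term

        ri_total        = ri_upfront + ri_hourly * HOURS_PER_YEAR
        break_even_util = ri_total / (on_demand_rate * HOURS_PER_YEAR)

        print("Heavy RI, 1-year total : $%.2f" % ri_total)                           # $335.26
        print("On-Demand, 24/7 year   : $%.2f" % (on_demand_rate * HOURS_PER_YEAR))  # $745.11
        print("Break-even utilization : %.0f%%" % (break_even_util * 100))           # 45%

    With these made-up numbers the break-even lands near the quoted 43%, and a true 24/7 server sits at 100% utilization, so it would comfortably clear it.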


  • NetApp FAS270 head doesn't see disks

    - by wfaulk
    I have an FAS270C. For months I've been running it in a split-head manner (that is, with each head serving data totally independently, and without clustering even being enabled) in order to facilitate moving some data around. I finally got everything situated, moved all the data to one of the heads, and was trying to get clustering set back up.

    Now, when I try to install OnTap onto the "new" head, it cannot see any of the disks in the head shelf (that is, the shelf into which the heads are inserted). I've booted into maintenance mode, and it shows me that the 0b adapter, which should be the adapter that that shelf and its disks are presented on, is in "OFFLINE (physical)" state. If I try to enable it with either "storage enable adapter 0b" or "fcadmin online 0b", it waits for about 30 seconds and then says:

        [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0b.
        [fci.adapter.online.failed:error]: Fibre Channel adapter 0b failed to come online.

    There is currently nothing attached to its external 0b port. I've tried it with and without an SFP plugged into it, and with and without its internal termination switch on. The currently active head can see those disks, and can see that two of them are assigned to the other head. Before I started reconfiguring, the "new" head could see disks on that shelf. They may even be the same disks that OnTap was installed on previously.

    Does anyone have any idea how to proceed?


  • Windows 7 Users unable to add Windows 2003 server printers

    - by TravBrack
    Hi there. I just rolled out a few Windows 7 x64 machines and ran into an issue where non-admin users are unable to add printers hosted on a Windows 2003 server. It works fine with a 2008 server. The issue appears to be with the Point and Print system: a user will attempt to add the printer, a prompt will come up requiring the user to elevate privileges in order to install a driver, and it will fail citing "access denied".

    I found the group policy setting Point and Print Restrictions, which says: "When the policy setting is disabled: Windows Vista computers will not show a warning or an elevated command prompt when users create a printer connection to any server using Point and Print." So I disabled it and verified that the policy was being picked up using rsop, but it still does the same thing.

    I've also tried the following: recreating the printers using newer drivers; adding the printer using 32-bit drivers on the 2003 machine, then adding the 64-bit drivers from a Windows 7 machine; and adding the printer from a Windows 7 machine using Print Management. None of these things work. The security settings are no different from those of the working printers.

    Help?


  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I've bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost.

    Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2 GHz, 4 GB RAM and a 320 GB 7200 RPM HD. I simultaneously run:

    - Visual Studio 2010 with some plugins
    - SQL Server
    - Firefox with at least 10 tabs
    - Chrome with about 5 tabs
    - IIS
    - a VM with a Server 2008 machine and SharePoint

    and occasionally also Photoshop and some InDesign as well. So I don't let my machine have a break. :D

    Question: If I buy myself some really fast SDHC card (like a SanDisk 16GB Extreme 30MB/s; is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's hybrid Momentus drive with its 4 GB of solid-state storage? What could I actually expect if I do put this card into my machine? And what would be the recommended size?

    Observations: I guess redirecting the page file to it would speed up the system, and some VMs on it would probably run faster as well, because they could run in parallel with the host system's HD.


  • How do I apply multiple subnets to a server with one NIC?

    - by Cosban
    I am trying to route multiple IPs through one physical NIC on my dedicated server for use with Proxmox KVM VMs. I have a dedicated server which is currently running Debian 4.4.5-8, with 3 IP addresses available for use, which will be displayed as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on the same subnet as main) and 5.xxx.xxx.166 (a different subnet). I am currently trying to route the third IP address on the dedi for use with a VPS that I have set up using Proxmox v2.x, but I am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems.

    The provider has given the following information for the IP addresses on the main subnet:

        gateway:   176.xxx.xxx.193
        netmask:   255.255.255.224
        broadcast: 176.xxx.xxx.223

    As well as the following information for the IP address on the second subnet:

        gateway:   5.xxx.xxx.161
        netmask:   255.255.255.248
        broadcast: 5.xxx.xxx.167

    Everything I've tried with /etc/network/interfaces has either not worked or has rendered the network completely useless. This is the current state of the file, which has the secondary IP address working on the same subnet, and IPv6 working, but not the second subnet:

        # Native IPv6 interface
        iface eth0 inet6 manual

        # Bridge IPv4 interface (176.xxx.xxx.193/27)
        auto vmbr0
        iface vmbr0 inet static
            address 176.xxx.xxx.196
            netmask 255.255.255.224
            gateway 176.xxx.xxx.193
            broadcast 176.xxx.xxx.223
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0

        auto vmbr1
        iface vmbr1 inet static
            address 5.xxx.xxx.166
            netmask 255.255.255.248
            gateway 5.xxx.xxx.161
            broadcast 5.xxx.xxx.167
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 5.xxx.xxx.166/27 dev vmbr1

        # Bridge IPv6 interface (range: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx::/64)
        iface vmbr0 inet6 static
            address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
            netmask 64
            up ip -6 route add xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            up ip -6 route add default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0


  • SQL Server 2008 Remote Access

    - by Timothy Strimple
    I'm having problems connecting to my SQL Server 2008 database from my computer. I have enabled remote connections as described in this answer (http://serverfault.com/questions/7798/how-to-enable-remote-connections-for-sql-server-2008), and I have added the ports listed on the Microsoft support page to our Cisco ASA firewall, but I'm still unable to connect. The error I'm getting from SQL Management Studio is:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060)

    Once again, I have double- and triple-checked that remote connections are enabled under the database properties and that TCP is enabled on the configuration page. I've added TCP ports 135, 1433, 1434, 2382, 2383, and 4022, as well as UDP 1434, to the firewall. I've also checked that 1433 is the static port set in the TCP section of the database server configuration. The ports should be configured correctly in the firewall, since HTTP/HTTPS and RDP are all working and the SQL Server ports are set up the same way.

    What am I missing here? Any help you could offer would be greatly appreciated.

    Edit: I can connect to the server via TCP on the internal network. The servers are colocated in a datacenter, and I can connect from my production box to my development box and vice versa. To me that indicates a firewall issue, but I've no idea what else to open. I've even tried allowing all TCP ports to that server without success.
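
    To narrow down where the connection dies, I've been probing the ports from my machine with a quick script like the one below (the host is a placeholder; RDP's port 3389 is included as a known-good control). UDP 1434, being connectionless, would need a separate test.

        import socket

        HOST = "203.0.113.10"  # placeholder for the server's public address
        PORTS = [135, 1433, 2382, 2383, 4022, 3389]

        for port in PORTS:
            try:
                with socket.create_connection((HOST, port), timeout=5):
                    print("tcp/%d open" % port)
            except OSError as e:
                print("tcp/%d blocked or closed (%s)" % (port, e))

    If 3389 connects but 1433 times out, the firewall (or something in between) is still eating the SQL Server traffic; if 1433 connects and Management Studio still fails, the problem is on the SQL side.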


  • I get a segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)


  • Purge print driver cache on Windows 7 with a PowerShell script

    - by Doltknuckle
    [Background] We have been having trouble with our network clients suddenly being unable to print. They get an odd error with a hex code. We determined that something in the driver was messed up and that we could resolve the issue by clearing the driver cache and reinstalling the driver. This happens to random computers every so often. We're assuming this is a bug in the latest Dell 2330dn driver, since that is the only model that has this problem.

    [Problem] What we are looking to do is write a PowerShell script that clears the driver cache and redownloads the driver. I see a ton of scripts out there to manage queues, servers, and ports, but nothing for local driver cache management.

    [Current Workaround] Since we have to do this manually, I'll write out the steps so you know what we want the script to replicate (a rough automated sketch is at the end of this post):

    1. Disable the print spooler.
    2. Restart the machine.
    3. Delete the contents of C:\Windows\system32\spool\drivers\w32x86.
    4. Enable the print spooler and start the service.
    5. Delete the network printer object and re-add the network printer from the server.

    [Request] I'm good enough with PowerShell to translate the above workaround into a pair of scripts, but I'd like to find a more elegant solution than my current workaround. Any suggestions?
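
    For reference, here is the rough shape of what I have so far, prototyped in Python with stock Windows commands via subprocess (translating each call to PowerShell is mechanical). It must run elevated; the printer share name is a placeholder, and it skips the reboot on the assumption that stopping the spooler releases the driver files:

        import shutil
        import subprocess
        import time
        from pathlib import Path

        DRIVER_CACHE = Path(r"C:\Windows\System32\spool\drivers\w32x86")
        PRINTER = r"\\printserver\Dell2330dn"  # hypothetical share name

        # 1. Stop the spooler so the driver files aren't locked.
        subprocess.run(["net", "stop", "spooler"], check=True)

        # 2. Empty the local driver cache (the manual process reboots here;
        #    stopping the spooler is normally enough to release the files).
        for entry in DRIVER_CACHE.iterdir():
            if entry.is_dir():
                shutil.rmtree(entry, ignore_errors=True)
            else:
                entry.unlink()

        # 3. Bring the spooler back.
        subprocess.run(["net", "start", "spooler"], check=True)
        time.sleep(10)

        # 4. Drop and re-add the network printer connection, which pulls a fresh
        #    driver from the server (printui.dll is the same plumbing the Add
        #    Printer wizard uses).
        subprocess.run(["rundll32", "printui.dll,PrintUIEntry", "/dn", "/n", PRINTER])
        subprocess.run(["rundll32", "printui.dll,PrintUIEntry", "/in", "/n", PRINTER], check=True)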


  • Repairing Damage to VMWare Virtual Disk

    - by Lachlan McDonald
    Evening all. I've got a considerable problem I'm hoping to get some resolution on.

    I had two VMware 6.5 virtual machines, one running Ubuntu 9.10 and the other Ubuntu 10.04. I used 9.10 as a testing server, so I could install a LAMP environment to prepare some code. Over the months I took a number of snapshots of this VM just in case something went wrong, and I did a full copy of the entire VM a month ago. I created the 10.04 VM when Lucid Lynx launched, so I could continue development on a fresh install. To get the files over, I simply added the 9.10 virtual disk to the 10.04 VM, grabbed some of the files I needed, and dismounted it.

    Unknown to me at the time, the changes to the 9.10 virtual disk meant that I could no longer boot it with the 9.10 VM; I'd always get the "The parent virtual disk has been modified since the child was created." error. I decided this was a good time to back up all the critical files, but now, whenever I open the 9.10 disk to get the data, it isn't in the same state as it was earlier.

    My question is: is it possible that when I mount the virtual disk I'm not seeing the most recent snapshot, or, in my blundering, have I lost the virtual disk?

    Cheers


  • How to set up Proxmox 1.9 on a VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly; however, at the moment I really need to make stuff work on short notice.

    I am trying to set up a Proxmox virtualization platform in an existing network. The network currently consists of several servers which run the free edition of VMware. There is some sort of VPN defined in the switch. In order for the VMware management interface to be accessible, a checkbox needs to be ticked in the network settings for VPN and the VPN ID entered.

    I didn't notice any such configuration option during the Proxmox installation, so my Proxmox VE on the same physical server, using the same manual IP settings (IP/netmask/gateway), is not accessible. As I understand it, I should touch Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea what I should aim for: do I specify the settings for eth0, or do I make a virtual interface? How do I make it accessible for both Proxmox VE and the future VMs?

    I read the Proxmox installation guide, but unfortunately it presumes a better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.

