Search Results

Search found 23614 results on 945 pages for 'update from'.

Page 747/945

  • Cannot access Domain Controller through VPN

    - by Markus
    In our small network there is a Windows 2008 R2 Domain Controller that also serves as Remote Access Server. For years we could access this server and the resources in the network over a VPN connection without any problem. For some time now, however, I am able to connect to the VPN, but my Windows 8 client (and another one I used for testing purposes) cannot connect to the domain controller afterwards. I can access any other server in the network, but there seems to be a problem with the trust between the client(s) and the server. If I connect the client to the network directly over a LAN cable, everything works as expected. I can also connect to another server over VPN and open an RDP session to the DC from there without a problem. On the client, whenever I try to access the DC, I get an access denied message. I've tried to update the group policies both over VPN and LAN. I've also removed the client from the domain and re-added it. The client shows a message that Windows requires valid login information when connected to the VPN - but my credentials are valid. They work when I log on to the client while not connected to the VPN, and also when connected to the LAN. Turning off the firewall on the client and the server did not change anything. DNS resolution works both on the server and the client. What else can I do to diagnose and solve the problem?
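
    A few checks I would run from the affected client while connected over the VPN, just to see whether it is the machine's secure channel to the DC that is failing (the domain name below is a placeholder):

        Test-ComputerSecureChannel -Verbose    # PowerShell: is the computer account trust healthy right now?
        nltest /sc_query:EXAMPLE               # which DC the secure channel points at, and its status
        nltest /dsgetdc:EXAMPLE                # can a domain controller be located at all over the VPN link?
        klist                                  # are Kerberos tickets being issued, or is it falling back to NTLM?

    If these succeed on the LAN but fail over the VPN, that points at name resolution or blocked Kerberos/RPC traffic on the VPN path rather than at the computer account itself.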

    Read the article

  • How can I diagnose why Outlook 2007 fails with error 800CCC0F when sending an attachment, even though the message was sent?

    - by James
    As the title suggests, I've got an issue where Outlook 2007 reports that it failed to send email with error 800CCC0F (unexpectedly terminated connection), but only with attachments. The email is actually sent, but Outlook keeps retrying (the message stays in the Outbox), generating more emails to the original recipient (which do get delivered). I've got QMail on the server side supporting a half dozen domains. It doesn't appear to matter which account I send from. I can successfully send attachments via alternate mail clients (webmail, Thunderbird) while Outlook is failing, or send messages without attachments; so it's seemingly not the accounts themselves or the server side, which leaves Outlook as the culprit. There doesn't appear to be any pattern to the failures, and it's not consistent (I successfully sent an attachment as recently as 3 weeks ago), so I'm at a loss as to where to look. The QMail logs don't look any different between successes and failures. Has anybody seen this before / have a solution? UPDATE: It appears it's only PDF files that this occurs with, so I'm even more stumped. I can send html/docx/txt and zip, UNLESS the zip file contains a pdf ... whiskey tango foxtrot

    Read the article

  • Configure nginx for slow connections to avoid corrupted downloads

    - by user1850273
    We have a Windows 2003 server running nginx 1.3.8. Our problem is users with slow connections, around 10 KB/s. Our server serves our program's update files, and when those users download from our server the downloaded file is incomplete or corrupted. (Users cannot download the file with a download manager, and the problem is in IE.) For example, on a slow connection a 25 MB file finishes after only 2 MB has been downloaded; on high-speed connections there is no problem. Also, when we redirect these slow connections to another port, e.g. 50005, with the same config, the downloads are much better, but still not as good as on other servers. Which settings should we apply to avoid these download stops or corrupted downloads on slow connections? This is our server config:

        worker_processes 1;
        events {
            worker_connections 1024;
        }
        http {
            include mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent '
                            '"$http_user_agent"';
            access_log logs/access.log main;
            sendfile off;
            keepalive_timeout 60;
            server {
                listen 80;
                server_name localhost;
                location / {
                    root html;
                    deny 127.0.0.3;
                    index index.html index.htm;
                }
            }
            server_tokens off;
        }

    Our server uses htaccess password protection and we cannot use IIS on Windows. Which solution do you think is better: IIS with an extension to use Apache htaccess, or Apache for Windows instead of nginx? Thank you.
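
    One directive worth looking at (an assumption on my part, not something already tuned in the config above) is send_timeout: nginx closes a connection if the client does not read any data for that long between two successive writes, which on a ~10 KB/s link can cut off a large download partway through. A minimal sketch of the kind of change I mean:

        http {
            ...
            sendfile          off;
            send_timeout      600s;      # default is 60s; give very slow clients time between reads
            keepalive_timeout 60;
            output_buffers    2 64k;     # modest buffers so data is flushed steadily to slow clients
        }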

    Read the article

  • newbie: Allow domain users to change power-savings settings

    - by user65007
    I've just recently installed SBS 2011 on a server and added several computers to its domain. Now I've noticed that I cannot change power settings (even when logged in as a user who is in the Domain Administrator role; let's call it Admin for future reference). After some googling I ended up adding Admin to the local Administrators group using the Group Policy Management Editor (as I have no experience in server administration I'm not sure I did it right: I went to Group Policy Management, selected Forest: xxxxx - Domains - xxxxx - Group Policy Objects - Windows SBS Client - Windows 7 and Windows Vista Policy, went to the Settings tab on the right, right-clicked and selected Edit to open the Group Policy Management Editor, then User Configuration - Preferences - Control Panel Settings - Local Users and Groups, right-clicked on it and selected New - Local Group, then set Action to "Update", Group Name to "Administrators (built-in)", and added Admin to Members). After that I was able to change the power-saving settings on client computers (when logged in as Admin). Now the question: what should I do to allow any domain user to change these settings? Note that I do not want to force some predefined power plan on all computers; I want to set it up so that any domain user on any client computer is able to select a different power plan and make any adjustments to the selected one. Thank you for any suggestions, just keep in mind that I'm a newbie (but not completely dumb), so please answer accordingly :)

    Read the article

  • Linux that restores itself on each reboot

    - by jettero
    I'm looking for methods and software to help create a variant of lubuntu that will restore itself to an install state and/or update on every boot. I'm thinking of doing things like putting the root filesystem on a squashfs and using unionfs and tmpfs to make root writable, but automagically restorable. I'm thinking of updating the squashfs with rsync. Perhaps there are other ways to approach the problem. Perhaps root needn't be writable at all. All thoughts welcome. The home dir would be writable in the usual way. The goal, if it matters, is a Linux that's simple to maintain from the home office, but that functions correctly for customers. We have some custom software that we wish for customers to be able to run trivially on equipment we provide. Ideally these devices would have a "restore to factory" function that would put it back the way we intended. If this is part of the normal boot cycle, so much the better. Why lubuntu? Personal preference for this application. It has a usable desktop, but doesn't take up much ram.
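
    A minimal sketch of the union-mount idea, assuming a read-only squashfs base with a tmpfs upper layer (all paths and names here are illustrative, and this uses overlayfs - aufs/unionfs syntax differs slightly):

        # early in boot (e.g. from the initramfs): build a root that forgets all changes on reboot
        mount -t squashfs -o loop /boot/rootfs.squashfs /ro   # the pristine base system
        mount -t tmpfs tmpfs /rw                              # writes land here and vanish at reboot
        mkdir -p /rw/upper /rw/work
        mount -t overlay overlay -o lowerdir=/ro,upperdir=/rw/upper,workdir=/rw/work /newroot
        # /home stays a normal writable partition mounted on top of /newroot/home

        # updating the base image from the home office (again illustrative):
        rsync -a --delete rsync://updates.example.com/lubuntu-root/ /srv/rootfs/
        mksquashfs /srv/rootfs /boot/rootfs.squashfs.new -noappend
        mv /boot/rootfs.squashfs.new /boot/rootfs.squashfs    # takes effect on next boot

    With this layout, "restore to factory" amounts to nothing more than rebooting, and a deeper reset is just swapping in a fresh squashfs image.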

    Read the article

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying a similar file over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that gives me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage IO priorities on Windows?

    Read the article

  • Removed Old Domain Trust. Now Progress (9.1D) can't open DB File

    - by RLH
    My company has an old server, running Progress 9.1D on a Windows 2000 VM, which was used by our company OS (Vantage 6 by Epicor). Vantage was our primary OS for a very long time. About 2 years ago we migrated to a larger, corporate OS and cancelled our service contract with Epicor. Yesterday, we removed an AD trust between the corporate domain and the old AD domain we used in the days of Vantage. After restarting the virtual server, I have been able to start the ProService for 9.1D Windows service; however, I cannot get Vantage to start back up. When I run the application, I get the error listed below. Transcript: ** Could not connect to server for database [progress db file], errno 0. (1432) How can I fix this? FYI, I haven't had to work with Progress in years and even then I wouldn't have considered myself a "novice" - I'm even less knowledgeable than that title would suggest. Vantage had a lot of internal tools and I recall that Epicor support managed to prevent .pf scripts from being executed. If there was a Progress-specific patch that needed to be applied, you had to do it within the Vantage software OR they had to remote into the machine to do it. I may not be able to run a .pf script, but I do know that I can log into the console-based server application. (Yes, I can't even recall what that utility was called. It is sad.) It's been a long time and I never had to dig into Progress that much. Please help and feel free to ask questions. If you need more info, I'll update this post.

    Read the article

  • Linux can dual display but Windows can't?

    - by Mr_CryptoPrime
    I have two monitors, one connected HDMI-to-DVI and the other VGA-to-DVI. I have an AMD Radeon HD 6900 series graphics card installed. I got Ubuntu to display dual monitors, but then I restarted for an update and Ubuntu wouldn't even boot in recovery mode; it just kept cycling forever, displaying something about "timeout: killing [filepath] [hexadecimal]". So I tried booting into Windows (7 Professional) and it crashed, displaying IRQL_NOT_LESS_OR_EQUAL and PAGE_FAULT_IN_NONPAGED_AREA after rebooting. I went into the BIOS and reverted to the system defaults, ran System Restore, and it booted OK, but the DVI-to-DVI monitor would not display. I made sure my drivers were updated and Catalyst was updated. Also, through research I discovered that only one output is for VGA and the other is digital only, so I put the VGA-to-DVI cable in the VGA slot and so on. Neither Windows nor Catalyst will detect the DVI-to-DVI monitor. Any suggestions? Thanks. EDIT: I found out that booting into Ubuntu with only one monitor (using either monitor with either cable) works perfectly fine. I can then add another monitor and it displays OK. However, it will, out of the blue, suddenly distort the display. At first I thought the computer had crashed, but it is something with the video output from the GPU to the monitor, because I pushed the power button and it would refresh every 5 seconds or so and I could see the "Ubuntu will power down in X seconds" message, even though it was horribly distorted. Any ideas what's causing this?

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution for the sake of the safety of my group's shared passwords, projects, work, etc. Our network has Active Directory with public/groups/users shares and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up, they just took over recently when the previous ones quit. We usually find them laughing at private content from users stored in the group shares, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 at the moment) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give the idea of what we've been trying to reach. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption has gone completely wild, so reporting them is likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to report them.

    Read the article

  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps through the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this:

        #!/bin/bash
        while true ; do
            # synclient prints "TouchpadOff = 1" when the touchpad has been switched off
            if synclient | grep -q "TouchpadOff[[:space:]]*=[[:space:]]*1" ; then
                synclient TouchpadOff=0
            fi
            sleep 3
        done

    (I don't have the laptop to hand right now but you get the point! I will update later when I'm at my laptop if that's incorrect.) So I tried running this as an Upstart job so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is started from Upstart. I tried running it with something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than Upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's X session?
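
    If Upstart is kept, the job needs to know which X display and authority file to talk to, since synclient works against the user's X session. A sketch of the kind of job file I mean (the start/stop events, display number, and paths are assumptions to adjust):

        # /etc/init/touchpad-watch.conf  (illustrative)
        description "re-enable the touchpad when it turns itself off"
        start on desktop-session-start   # emitted by lightdm on Ubuntu
        stop on desktop-shutdown
        respawn
        env DISPLAY=:0
        env XAUTHORITY=/home/ledneb/.Xauthority
        exec /usr/local/bin/touchpad-watch.sh

    The alternative of starting the script from the session itself (an autostart .desktop entry or ~/.xinitrc) avoids the DISPLAY/XAUTHORITY guessing entirely, since it inherits the user's environment.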

    Read the article

  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device with an ipsec tunnel for a branch office. I am trying to figure out the right way to transition folks at the new site over to the new connection, so that they default to using the much faster tunnel. Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged in to private t1) Test network I have been using with ipsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfsense fw plugged in to public internet), also plugged in to same switch as the old network. There are probably ~20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall--I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working because they point to the wrong gateway. I'd like some ability to do this slowly--such that I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16, and update gateway to point to the new device? Could this cause any problems? Also, how should I handle DNS? The pfsense box is not aware of my Active Directory environment. But if I change DNS to use the local servers, it will result in a huge slowdown as DNS queries will still be routed over the private t1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPSEC tunnel before I make the final switch. The AD version is 2008R2, as are the servers. Workstations are mostly Windows XP SP3. I have not configured the 192.168.1.0/24 as a site in AD sites and services.
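
    One low-risk way to move a handful of test clients first (a sketch, not a full plan; the interface name and addresses are placeholders, and the netsh syntax shown is the Win7/2008 form, which differs slightly on XP): widen the mask to /16 on just those machines and point their default gateway at the pfSense box, while leaving DNS on the existing AD servers so domain lookups keep working:

        netsh interface ip set address "Local Area Connection" static 192.168.254.50 255.255.0.0 192.168.1.1 1
        netsh interface ip set dns "Local Area Connection" static 192.168.254.10

    Because 192.168.254.0/24 and 192.168.1.0/24 both sit inside 192.168.0.0/16, a /16 client sees both networks as on-link and sends everything else out through the new firewall, while untouched machines keep using the old Cisco gateway until you are ready to move them.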

    Read the article

  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process memory usage is growing steadily (1.5 GB after only 2 days of work). The database is made of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls some stored procedures via Microsoft OLE DB; each of them does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE....IF @@ROWCOUNT=0 INSERT.....). I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call and then closes the connection for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem:
    Is that steadily growing memory usage normal?
    Did I make any mistake in database design which probably leads to this behaviour (no explicit temp-table drop, filegroup organization, etc.)?
    Can SQL Server manage such a stored procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)?
    Does the continuous "open connection / do sp call / close connection" pattern disturb SQL Server?
    Is it possible to diagnose what is causing such memory usage? Perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big number of orphan connections from the external application.)
    If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
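
    For what it's worth, a quick way to both bound the instance and see where the memory is actually going (a sketch; the 512 MB cap is only an example, and Express already limits the buffer pool to roughly 1 GB, so growth well past that is usually charged to other memory clerks):

        -- cap the instance (value is illustrative)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;

        -- see which memory clerks the memory is charged to
        SELECT TOP (10) [type], SUM(pages_kb) / 1024 AS used_mb
        FROM sys.dm_os_memory_clerks
        GROUP BY [type]
        ORDER BY used_mb DESC;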

    Read the article

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over a slow and expensive modem connection. In today's world, file size is no longer a concern, at least not regarding JPG where it seems silly to save 45kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default 85? Why should I not crank it up to 100 for all files? Update based on comments: For web work, I might use PNG instead. But every smartphone and camera produces JPG files. The question arises when I save these edits. Audience is my own harddisk. We're talking photos, 2-5MB apiece. Chroma, subsampling, DCT - sorry, never heard of it. I'm a home user, not Photoshop guru. For the record, I use Paint Shop Pro on Win, and Gimp on Linux.

    Read the article

  • HP LaserJet 1515: Disable "refill" warning

    - by Pekka
    I have a HP LaserJet 1515 connected to a Windows 7 PC. The Magenta cartridge is empty; the printer shows a warning to that effect, and won't let me print even black-and-white documents any more. I can't turn the warning off manually using the printer's small console: When I try to enter any menu, the display says "Menu access disabled". I have no idea why. There is a setting to override the warning, but it can't be changed using the Network interface in the browser (Although it is there on the status page) According to the manual,the HP printing tool is supposed to offer a switch for this, but it won't install on my Windows 7. It just rumbles about for half an hour, to magnificently exit with an "unknown error" requiring a reboot. On second look, the problem seems to be that Windows 7 just isn't supported. There is no download link for the tool when you specify Windows 7 as your OS. I just want to print a black-and-white-document on a printer whose black cartridge is still 65% full. Is this indeed impossible? On second thought, I'm cross-posting this on the HP support forum. I'll update here if anything comes up.

    Read the article

  • Paranormal activity in My Pictures folder: Thumbnail doesn't match actual picture.

    - by Sam152
    After finding an amusing picture on a popular imageboard, I decided to save it. A few days passed and I was browsing my images folder when I realised that the thumbnail generated by Windows XP in the Thumbnails view did not match the actual image. Here is a comparison image: What's even stranger in this situation is that the parts of the photograph that are different have actually been replaced with what might be the correct background. Furthermore, it is a JPEG (no PNG transparency tricks) that is 343 kilobytes but only 847x847 pixels. What could be going on here? Could there be anything malicious in the works, or hidden data? Before anyone asks, I have checked and performed the following: Deleted Thumbs.db to reload thumbnails. Opened the image in different editors (they appear with the text). Moved the image to a different directory. Changed the extension to .rar. All these steps produce the same results. Pre actual posting update: It seems that opening the image in Paint and changing the image entirely (deleting the entire contents and making a red fill) will still generate the original thumbnail, even after deleting Thumbs.db etc. I'm also hesitant to post the original data, in case there is something malicious or hidden that could be potentially illegal. (Although it would be very beneficial to see if it works on other computers and not just my own.)
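
    One diagnostic worth adding to that list (my suggestion, not something from the original post): Windows XP's Thumbnails view can reuse a thumbnail embedded in the JPEG's own EXIF data instead of rendering the pixels, so a mismatched embedded preview would produce exactly this effect. With exiftool you can pull the embedded thumbnail out and compare it to the full image (the file name here is a placeholder):

        exiftool -b -ThumbnailImage suspect.jpg > embedded_thumb.jpg
        exiftool -ThumbnailImage -PreviewImage -ImageSize -FileSize suspect.jpg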

    Read the article

  • Print job leaves queue but document isn't printed

    - by midnightstar
    I'm dealing with an HP Deskjet F380 All-in-One printer. It's connected via USB to a desktop running Windows 7 Enterprise x64. If I attempt to print something like a web page or a Word document, the print job shows up in the print queue and the printer stirs - by that I mean it seems to prepare itself to print. However, the print job then leaves the queue (I'm thinking the computer sees it as completed) and the printer never actually prints anything. However, if I go into Devices and Printers under the Windows Start menu, into the printer properties, and print a test page, the test page prints successfully. I attempted to uninstall and re-install the printer drivers, but the printer continued the same behavior afterwards. I also connected the printer to another computer and there it will print just about anything. I also checked that the computer the printer needs to be connected to is up to date as far as the OS goes; the machine is fully up to date. I played with the way the computer handles printer spooling: under the printer properties, on the "Advanced" tab, I had the print job print directly to the printer. In all these instances, the same behavior continues. I've restarted the Print Spooler service. I've also gone into C:\Windows\System32\spool\PRINTERS and deleted the files that were sitting in the folder. I have run SFC /scannow and the system found no errors in the system's integrity. I also cold-booted both the computer and the printer. The only lead I really have is that since the printer prints on other PCs, I can only assume there is something wrong with the way this PC is configured.

    Read the article

  • Using Openfire for distributed XMPP-based video-chat

    - by Yitzhak
    I have been tasked with setting up a distributed video-chat system built on XMPP. Currently my setup looks like this: Openfire (XMPP server) + JingleNodes plugin for video chat OpenLDAP (LDAP server) for storing user information and allowing directory queries Kerberos server for authentication and passwords In testing with one set of machines (i.e. only three), everything works as expected: I can log in to Openfire and it looks up the user information in the OpenLDAP database, which in turn authenticates my user with Kerberos. Now, I want to have several clusters, so that there is a cluster on each continent. A typical cluster will probably contain 2-5 servers. Users logging in will be directed to the closest cluster based on geographical location. Something that concerns me particularly is the dynamic maintenance of contact lists. If a user is using a machine in Asia, for example, how would contact lists be updated around the world to reflect the current server he is using? How would that work with LDAP? Specific questions: How do I direct users based on geographical location? What is the best architecture for a cluster? -- would all traffic need to come into a load-balancer on each one, for example? How do I manage the update of contact lists across all these servers? In general, how do I go about setting this up? What are the pitfalls in doing this? I am inexperienced in this area, so any advice and suggestions would be appreciated.

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5:

        Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ...
        start: Job failed to start
        invoke-rc.d: initscript mysql, action "start" failed.
        dpkg: error processing mysql-server-5.5 (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of mysql-server:
         mysql-server depends on mysql-server-5.5; however:
          Package mysql-server-5.5 is not configured yet.
        dpkg: error processing mysql-server (--configure):
         dependency problems - leaving unconfigured

    I tried a wide variety of approaches suggested by googling, involving various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with MySQL, per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?
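
    Before fighting apt any further, it may help to find out why mysqld itself refuses to start, since the package failure is just the post-install script failing to start the service (a sketch; the paths are the Ubuntu defaults):

        sudo tail -n 50 /var/log/mysql/error.log     # mysqld's own log, if it got far enough to write one
        grep -i mysql /var/log/syslog | tail -n 50   # upstart and apparmor messages often end up here
        sudo service mysql start                     # reproduce the failure outside of dpkg
        # once mysql starts cleanly, let apt finish the half-configured packages
        sudo apt-get -f install
        sudo dpkg --configure -a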

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in Device Manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks. UPDATE: According to this page on the HDTune website: "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Advanced cell selection in Excel

    - by Supuhstar
    I am new to this flavor of Stack Exchange, so if this belongs elsewhere, please move it; I figured this would be the best place, though. I am making an Excel worksheet that simply stores basic financial data in 5 columns (Check Number, Date of Transaction, Description, Profit from Transaction, and Balance After Transaction) and indefinite rows. Each worksheet represents one month, and each workbook represents a year. As I make or receive a payment, I store it as a new row, which inherently makes the number of rows per month indefinite. Each transaction's Balance cell is the sum of the Balance cell of the row above it and the Profit cell of its row. I want each month to start off with a special row (the first one after the column headers) that displays a summary of the last month's transactions. For instance, the Balance After Transaction cell would display the last row's balance, and the Profit from Transaction cell would display the overall profit of the month. I know that if I knew every month had exactly 100 expenses, I could achieve this for March with the following formulas for profit and balance, respectively:

        =February!E2 - February!E102
        =February!E102

    However, I do NOT know how many rows will be in each month's table, and I'd like to automate this as much as possible (for instance, if I find a missed or duplicated expense in January, I don't want to have to update all the formulas that point to the ending January balance). How can I have Excel automatically use the last entered value in a column, in any given Excel spreadsheet, in a formula?
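
    A sketch of one way to do that, using the sheet and column names from the example above and assuming column E has no blank cells between entries: COUNTA counts the filled cells in February's column E (headers included), and INDEX then returns the value in that last filled row, so March's summary row no longer hard-codes row 102.

        =February!E2 - INDEX(February!E:E, COUNTA(February!E:E))    (profit cell, replacing the hard-coded E102)
        =INDEX(February!E:E, COUNTA(February!E:E))                  (balance cell)

    If blank cells can appear inside the column, =LOOKUP(2, 1/(February!E:E<>""), February!E:E) returns the last non-blank value regardless of gaps.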

    Read the article

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I've a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. Update: Here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop also has the same problem (I tried TeamSpeak 3). Here are some details on my laptop and desktop. Laptop: Dell Studio 1555, Core 2 Duo P8600 2.4 GHz, 4 GB RAM dual channel, ATI HD Radeon 4570 512 MB dedicated (up to 2048), IDT High Definition Audio. Desktop: Asus P5KPL-AM motherboard, Dual Core CPU E5200 2.50 GHz, 2x2 GB PC6400 dual channel, ATI Radeon HD 4650 512 MB, VIA High Definition Audio. Both computers have Windows 7 Professional 64-bit. So how do I restore my audio? SOLVED: The problem was in the router firmware; there was a bug that recognized VoIP traffic as a DoS attack and the router garbled every packet. I installed the newest firmware and everything is fine :)

    Read the article

  • The requested operation has failed! (cannot find answer)

    - by Geoff
    I know this problem is plastered all over the web, but I've been searching and trying for hours with no luck. Can someone please give me some help? I originally installed Apache 2.0.64 along with PHP 5.2.17 and went through all of the steps in this tutorial with no luck; I found that the culprit was the LoadModule line. Looking around the internet I found a whole bunch of stuff, but a lot of it referred to PHP 5 with Apache 2.2. Since there seemed to be more info on Apache 2.2, I removed Apache 2.0.64 and installed 2.2. I added the LoadModule line to the conf file, but I got the same problem. I then followed the steps in this tutorial because it was slightly different, with some things I hadn't tried yet, but still I get the same problem. If I comment out the LoadModule line it works fine, but otherwise I get "The requested operation has failed!". This is what I ended up keeping, since only one line has to be commented out for it to work:

        LoadModule php5_module "c:/php/php5apache2_2.dll"
        <IfModule mod_php5.c>
            AddType application/x-httpd-php .php
            PHPIniDir "c:/php"
            DirectoryIndex index.php
        </IfModule>

    EDIT: How can I stop getting this error message? UPDATE: Also, please note that I took note of the message on the PHP site stating that if PHP 5.2 was to be run with Apache, the VC6 build should be used rather than VC9. I had VC9 so I replaced it with VC6; the file is labeled php-5.2.17-nts-Win32-VC6-x86.zip
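
    Two things I would try before anything else (suggestions on my part, not from the original post): run Apache from a console so the real startup error is printed instead of the generic "The requested operation has failed!", and double-check the PHP build, since php5apache2_2.dll is normally shipped only with the thread-safe (TS) packages, so an NTS zip like the one named above may not be usable as an Apache module at all.

        REM default install path shown; adjust to your Apache location
        cd /d "C:\Program Files\Apache Software Foundation\Apache2.2\bin"
        REM syntax-check httpd.conf first
        httpd.exe -t
        REM then start Apache in the console so the real error is printed
        httpd.exe -e debug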

    Read the article

  • Experience with asymmetrical (non-identical hardware) SQL Server 2005 / Win 2003 cluster

    - by user24161
    I am reasonably good at dealing with SQL Server clusters; I am wondering if folks have experience, good or bad, using a mix of different models of servers from the same vendor in one SQL 2005 cluster. Suppose: I have one more powerful, more RAM, more shizzle box and one less powerful, less memory, less shizzle box bound together in a 2-node cluster. These would be HP DL380 and 580 machines (not that it should matter) I understand AND automate the process of managing memory for each SQL instance, so there's no memory contention when SQL instances fail over. Basically I am thinking a CLR proc will monitor the instances and self-regulate memory caps on each instance, so that they won't page or step on one another. I get the fact the instances might be slower and or under memory pressure if they share a "lesser" node, and that's OK. The business can deal with a slower instance in a server-problem scenario. Reasonable? Any "gotchas" to watch out for? More info 10/28: doing some experiments with a test cluster I find that reconfiguring max/min memory is OK PROVIDED the instance isn't already under memory pressure. If I torture the system with a huge query that demands a big chunk of RAM, and simultaneously adjust the memory allocation to a smaller value than what is being actively used, it's possible to run the instance out of memory and have it halt and restart itself (unhappy situation). Many ugly out-of-memory messages in the error log, crashing, burning... It's an extreme case, but good to know. Seems, then, that it would only be really safe to set this on startup of the instance, as in have a startup script that says "I am on node1, so my RAM settings are X or I am on node two, so they are Y," like this: http://sqlblog.com/blogs/aaron_bertrand... Update: I am testing a SQL Agent + PowerShell solution described in more detail here.
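
    A sketch of that node-aware startup script (node names and memory figures are placeholders; SERVERPROPERTY('ComputerNamePhysicalNetBIOS') reports which physical node the clustered instance is currently running on, and startup procedures have to live in master):

        USE master;
        GO
        CREATE PROCEDURE dbo.SetMemoryForCurrentNode
        AS
        BEGIN
            DECLARE @node sysname, @max_mb int;
            SET @node = CONVERT(sysname, SERVERPROPERTY('ComputerNamePhysicalNetBIOS'));
            SET @max_mb = CASE @node
                            WHEN 'NODE-BIG'   THEN 6144   -- the more powerful box
                            WHEN 'NODE-SMALL' THEN 2048   -- the less powerful box
                            ELSE 2048                     -- safe fallback
                          END;
            EXEC sp_configure 'show advanced options', 1;
            RECONFIGURE;
            EXEC sp_configure 'max server memory (MB)', @max_mb;
            RECONFIGURE;
        END
        GO
        EXEC sp_procoption 'dbo.SetMemoryForCurrentNode', 'startup', 'on';
        GO

    Each instance would carry its own copy with its own numbers, so whichever node the instances land on after a failover, their combined caps stay within that node's RAM.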

    Read the article

  • What changed between Excel 2007 and 2010 that is causing my copied worksheet save to fail?

    - by snorehorse
    When I do this in Excel 2010 it fails, but it works in Excel 2007: Create a new workbook and insert an image onto a worksheet, or take a preexisting worksheet with an image. Copy the worksheet into a new workbook by clicking the worksheet tab, clicking Move or Copy, and choosing (new workbook) as the destination. Close the source workbook. Attempt to save the new workbook. The message is: "Errors were detected while saving 'myfilepathhere.xlsx'. Microsoft Excel may be able to save the file by removing or repairing some features. To make the repairs in a new file, click Continue. To cancel saving the file, click Cancel." Clicking Continue brings up another file dialog window followed by more repair errors. It seems that behind the scenes it is looking at the source workbook when it tries to save the image in the new destination workbook. No useful error message, of course - thanks, Microsoft. But this problem never happened in Excel 2007. The reason I am closing the source workbook before the save is that I don't need the end user to see it after I programmatically pull a cover sheet (with the image) from it, in an interop app. Thanks for any help. Update: I don't encounter this problem if I open the source workbook as "Read Only" (I do this programmatically using Excel Interop).

    Read the article

  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 to install. It fails every time. Below is what looks like the relevant content of the CBS.log file. If there are further details that would help or more information I can gather, I will get it.

        2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
        2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
        2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
        2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
        2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
        2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run all the troubleshooters I could find. Nothing has worked so far. Any advice would be appreciated.
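
    For what it's worth, 0x80070490 is ERROR_NOT_FOUND, which in CBS terms usually means a corrupt or missing entry in the component store; the usual next steps (my suggestion, not taken from the log) are an SFC pass and the System Update Readiness Tool (KB947821), then reviewing its log before retrying SP1:

        sfc /scannow
        REM install the System Update Readiness Tool (KB947821), then check what it found and fixed:
        notepad %windir%\Logs\CBS\CheckSUR.log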

    Read the article
