Search Results

Search found 21322 results on 853 pages for 'vs 2008'.


  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8. One is the 13" MBP connected via WiFi, and the other is a 24" 2008 iMac connected via Ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor can I see any computers under "Network" in Finder. Restarting doesn't help. I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard. Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it? Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue. Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in the comments shows me the information it's supposed to show, but the shares do not appear in the Shared section. They'll also vanish and reappear at random throughout the day.
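
    For reference, a couple of Terminal checks can separate a discovery (browsing) problem from an actual SMB problem. This is just a sketch; the hostnames are placeholders:

        # Is the server advertising itself over Bonjour/mDNS? (Ctrl-C to stop)
        dns-sd -B _smb._tcp local.

        # Can the shares be listed directly, bypassing Finder's browse list?
        smbclient -L imac.local -U myuser

    If smbclient lists the shares while Finder's sidebar stays empty, the problem is discovery (mDNS/browsing) rather than file sharing itself.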

    Read the article

  • Retail Windows XP Professional SP2: lost key but have the genuine CD, what to do?

    - by AdityaGameProgrammer
    I recently formatted my system, only to find out that I have lost the CD key to my original CD. I had used the option to enter the product key later. Yes, I know it was a stupid thing to do, but I bought the CD in 2008 from a retail store and I lost the original packaging. The actual label on the CD reads "includes Service Pack, version 2002, (c) 2004 Microsoft Corporation, all rights reserved". There are some numbers on the back side of the CD in the inner ring. I can't for the life of me figure out what use the genuine CD is to me when I can't seem to activate it. What exactly is the advantage of having the original CD in your possession in situations like this? I have tried unattend.txt and it doesn't contain the correct key, and there is no winnt.sif file on the CD. Where on the CD, or in it, can I find the product ID information? I live in India, and my attempts at the Microsoft support site keep getting me directed to a page which says they stopped support for Windows XP in 2011. Let's say by some miracle I do contact Microsoft: what information would I have to provide them? And would they give me the product key for my CD from their database, or a new key?
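
    For anyone else digging through the disc: as far as I can tell the CD carries a setupp.ini in the I386 folder, but it identifies only the license channel, never the product key - on retail media the key exists nowhere except the COA sticker on the packaging. A hypothetical example of what the file looks like (values invented for illustration):

        ; \I386\SETUPP.INI - identifies the media channel, not the CD key
        [Pid]
        ExtraData=707A667567736F696F697911AE7E05
        Pid=55274335

    The last three digits of Pid are commonly reported to indicate the channel (e.g. 335 for retail, 270 for volume license, OEM for OEM discs).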

    Read the article

  • MacBook Pro 2010 Ethernet Jumbo Frame (9K MTU) Support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 MacBook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set my MTU over 1500 on my new 2010 MBP i7, but my old early-2008 MBP (Core 2) has the 9000 MTU setting available. Everything I have is set up to use jumbo frames, and I thought Apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pin down some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but it is really frustrating if Apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustration. http://discussions.info.apple.com/message.jspa?messageID=12258067 http://discussions.apple.com/thread.jspa?messageID=12130158 Anyone have any experience or further knowledge about this issue... beyond typing my question into Google and giving the top 5 results as answers? :)
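
    For anyone comparing machines, the MTU ceiling is easy to probe from Terminal. A quick sketch, assuming the wired interface is en0:

        # Show what the interface currently reports
        networksetup -getMTU en0

        # Attempt a jumbo MTU; on chipsets without jumbo support this fails
        sudo ifconfig en0 mtu 9000

        # Verify end to end with an unfragmented large ping (8972 = 9000 - 28 bytes of headers)
        ping -D -s 8972 192.168.1.1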

    Read the article

  • Cannot start listening on a certain TCP port, but there's nothing currently listening on it

    - by John Rasch
    I have a Windows Service that uses a WCF service host to listen for connections on TCP port 61000. When I try to start the service, I get the error:

        Service cannot be started. System.ServiceModel.AddressAlreadyInUseException: HTTP could not register URL http://+:61000/ because TCP port 61000 is being used by another application. ---> System.Net.HttpListenerException: The process cannot access the file because it is being used by another process
           at System.Net.HttpListener.AddAll()
           at System.Net.HttpListener.Start()
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           --- End of inner exception stack trace ---
           at System.ServiceModel.Channels.SharedHttpTransportManager.OnOpen()
           at System.ServiceModel.Channels.TransportManager.Open(TransportChannelListener channelListener)
           at System.ServiceModel.Channels.TransportManagerContainer.Open(SelectTransportManagersCallback selectTransportManagerCallback)
           at System.ServiceModel.Channels.HttpChannelListener.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Dispatcher.ChannelDispatcher.OnOpen(TimeSpan timeout)
           at ...

    A quick netstat -a shows there is nothing listening on port 61000. I've also found several posts online that mention reserving namespaces using netsh, but the account that the service runs under has administrator privileges, so that shouldn't be necessary. Any other ideas as to why I'm getting this message? This service is running on 64-bit Windows Server 2008 R2 Standard.
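
    Worth noting: the inner exception comes from HttpListener, so the conflict is inside HTTP.sys rather than a raw TCP socket, which is why netstat can look clean. Two hedged checks from an elevated prompt:

        REM URLs currently registered with HTTP.sys (another app may hold http://+:61000/)
        netsh http show servicestate view=requestq

        REM Explicit URL ACL reservations
        netsh http show urlacl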

    Read the article

  • 'Cannot get iis pickup directory' in Windows Server 2012

    - by Meat Popcicle
    Our system moved from Windows Server 2003 (Enterprise SP2) with IIS 6. The new system is Windows Server 2012 (Standard) with IIS 8, plus the IIS 6 SMTP service for mail. I copied all of the web application files and IIS settings; every other function is OK, but the email system is broken. For example:

        Exception: System.Net.Mail.SmtpException: Cannot get IIS pickup directory.

        Line 284: SendMail sendmail = new SendMail();
        Line 285: sendmail.GetSendMail(messagefrom, Useremail, mailsubject, message);

        Stack trace:
        [SmtpException: cannot get iis pickup directory.]
           System.Net.Mail.IisPickupDirectory.GetPickupDirectory() +1894
           System.Net.Mail.SmtpClient.Send(MailMessage message) +1956518
           CommonDll.SendMail.GetSendMail(String messagefrom, String Useremail, String mailsubject, String message) +466
           ASP.common_users_courserecordadd_aspx.AddBtn_Click(Object sender, EventArgs e) in d:\"sourcefile.aspx":285
           System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
           System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
           System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981

        Microsoft .NET Framework v:2.0.50727.6407; ASP.NET v:2.0.50727.6387

    On our development server (2008 R2 Enterprise SP1, IIS 6 & 7.5) it works well. Confused.
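
    SmtpClient only asks IIS for the pickup directory when delivery is set to PickupDirectoryFromIis, and that lookup needs the IIS 6 metabase components the SMTP service depends on. A possible workaround - just a sketch, and the path is an assumption (use whatever directory your SMTP service is really configured with) - is to point .NET at the folder explicitly in web.config:

        <system.net>
          <mailSettings>
            <!-- Bypass the IIS metabase lookup; drop .eml files straight into the pickup folder -->
            <smtp deliveryMethod="SpecifiedPickupDirectory">
              <specifiedPickupDirectory pickupDirectoryLocation="C:\inetpub\mailroot\Pickup" />
            </smtp>
          </mailSettings>
        </system.net>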

    Read the article

  • Intermittent "Lost connection to MySQL server at 'reading initial communication packet'"

    - by db2
    Our web environment consists of two servers. Web front end: Dell PowerEdge R610, RHEL 5.5, Apache 2.2.17, PHP 5.2.14. Database server: Dell PowerEdge R710, Windows 2008 R2 Standard x64, MySQL 5.5.11-log x64. Normally these two work perfectly fine together. However, when I try to get them talking via a dedicated LAN on their secondary NICs (each machine has four of them), things get flaky. I have NIC #2 on both machines configured on the 172.16.1.0/24 subnet, with no gateway or DNS servers (obviously, since it's just those two systems), and I put the private IP address of each machine into the hosts file of the other. The routing tables on both machines look okay after I do this. I've tried this with both a crossover cable draped directly between the two NICs, and also via a dedicated VLAN on the switch in the rack. In either case, I get intermittent connection problems. It's a fairly small percentage of connections that fail, but it's enough to cause a significant problem, and I have to switch back to the main network connection, which has to contend with all the other traffic and hosts on the switch. The full error message that appears in the application log:

        SQLSTATE[HY000] [2013] Lost connection to MySQL server at 'reading initial communication packet', system error: 110

    Am I doing something really dumb that's causing this not to work properly? Anything I can check in MySQL that would explain why it occasionally fails to connect?
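
    One thing that bites people on a DNS-less subnet: by default mysqld performs a reverse DNS lookup on every incoming connection, and with no DNS servers reachable those lookups can hang intermittently until the client gives up mid-handshake (system error 110 is a timeout). A hedged my.ini tweak to rule that out:

        [mysqld]
        # Skip reverse DNS on incoming connections; grants must then be defined by IP, not hostname
        skip-name-resolve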

    Read the article

  • Enabling WinRM by Group Policy

    - by SaintNick
    I'm having partial success enabling WinRM through Active Directory GPOs in our Server 2008 R2 environment. I've created a GPO that enables "Allow automatic configuration of listeners" and also enables all the necessary predefined WinRM firewall rules. This GPO works fine for our web servers; indeed, this is reflected by "Server Manager Remote Management" nicely flipping to "Enabled" in the Server Manager server summary. However, the same GPO applied to both our management servers, which are domain controllers, does not give the same result. I see the GPO settings being applied, including the listener, as confirmed by:

        C:\Windows\system32>winrm e winrm/config/listener
        Listener [Source="GPO"]
            Address = *
            Transport = HTTP
            Port = 5985
            Hostname
            Enabled = true
            URLPrefix = wsman
            CertificateThumbprint
            ListeningOn = 10.32.40.210, 10.32.40.211, 10.32.40.212

    But in Server Manager, under Server Summary, Remote Management remains "Disabled", and indeed when trying to connect to one of these machines Server Manager gives an "Access Denied". Manually enabling WinRM locally via Server Manager's "Configure Server Manager Remote Management" on either of these machines works fine. What can be the cause? Could it have something to do with these machines being DCs and needing extra settings in the GPO? - Nick Reid
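
    For comparison, on 2008 R2 the "Configure Server Manager Remote Management" checkbox is also scriptable, and it does more than create the WinRM listener (it also registers the Server Manager WS-Management plumbing), which may be exactly the part the GPO is not covering. A sketch from an elevated PowerShell on one of the DCs - the script name and path are as documented for 2008 R2, worth verifying locally:

        # Equivalent to ticking the box in the Server Manager UI
        & "$env:windir\System32\Configure-SMRemoting.ps1" -Force -Enable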

    Read the article

  • Trojan Horse Generic.15.apnz Virus Impossible to get rid of?!

    - by DaveDev
    Hi guys, I'm new to this forum so I'm unsure whether this is the right place for a question like this. I have a Trojan horse infection that lives in memory and seems to be impossible to get rid of. I've tried a few antivirus products (Norton, Windows Essentials & AVG Free), all to no avail, and I've recently tried a few bootable antivirus solutions I found here: http://www.techmixer.com/free-bootable-antivirus-rescue-cds-download-list/ Kaspersky Rescue Disk 2008 (failed - it wouldn't even load the UI; is there a newer one out there?). F-Secure Rescue Disk (updates & scanner ran, found 10 infections, said it was going to delete or repair them, but didn't get rid of them). Avira found a load of infections but froze when I tried to interact with the UI after the scan. Every time I run these, I'll load Windows afterwards and run AVG, and it still finds Trojan Horse Generic.15.apnz (in services.exe) and Trojan Horse Generic.16.ARSU (in svchost.exe). Is anyone familiar with a virus like this? Can anyone suggest how to approach a solution? Thanks, Dave

    Read the article

  • Network profile reverts to 'Unidentified' following Windows Update reboot

    - by user140575
    I have searched high and low for a solution to this problem. I have multiple servers running Windows 2000 Server as well as Windows Server 2003, 2003 R2, and 2008 R2. All of these servers are on the same Active Directory domain. The servers normally show the network profile as Domain Network, which is fine and correct. However, when a Windows update is installed, the server changes the profile to Unidentified Network once it has rebooted, which then doesn't allow any traffic to the server. For security reasons, we can't turn the firewalls off. The only way to fix the problem is to physically be in front of the machine and work on it to change the profile back. Once the profile has been reinstated to the Domain profile, it will be fine until the next month's update. This happens on all the Windows versions mentioned above. The machines are not all identical, so it's not a hardware problem either. If anyone can help I'd be very grateful.
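
    On the 2008 R2 boxes at least, the usual trigger is the Network Location Awareness service running its domain-detection pass before the DC is reachable after the reboot. A hedged workaround to try before driving to the console, over whatever management channel still works:

        REM Restart Network Location Awareness so it re-detects the domain profile
        sc \\SERVERNAME stop nlasvc
        sc \\SERVERNAME start nlasvc

    (Windows 2000 and 2003 predate NLA-based network profiles, so this only applies to the newer machines.)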

    Read the article

  • Jira access with AJP-Proxy

    - by user60869
    I want to configure Jira access over an AJP proxy. I proceeded as follows (following this howto: http://confluence.atlassian.com/display/JIRA/Configuring+Apache+Reverse+Proxy+Using+the+AJP+Protocol):

    1) In server.xml I activate the AJP connector (the XML did not survive the forum formatting).

    2) Edit the vhost configuration:

        # Load proxy modules
        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        # Load AJP module
        LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

        # Proxy configuration
        <IfModule proxy_http_module>
            ProxyRequests Off
            ProxyPreserveHost On

            # Basic AuthType configuration
            <Proxy *>
                AuthType Basic
                AuthName Bamboo-Server
                AuthUserFile /var/www/userdb
                Require valid-user
                AddDefaultCharset off
                Order deny,allow
                Deny from all
                Allow from 192.168.0.1
                Satisfy any
            </Proxy>

            ProxyPass /bamboo http://localhost:8085/bamboo
            ProxyPassReverse /bamboo http://localhost:8085/bamboo
            ProxyPass /jira ajp://localhost:8009/
            ProxyPassReverse /jira ajp://localhost:8009/
        </IfModule>

    EDIT: In the logs I found the following:

        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1819): proxy: worker ajp://localhost:8080/ already initialized
        [Fri Nov 19 14:51:13 2010] [debug] proxy_util.c(1913): proxy: initialized single connection worker 1 in child 5578 for (localhost)
        [Fri Nov 19 14:51:32 2010] [error] ajp_read_header: ajp_ilink_receive failed
        [Fri Nov 19 14:51:32 2010] [error] (120006)APR does not understand this error code: proxy: read response failed from (null) (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] proxy_util.c(2008): proxy: AJP: has released connection for (localhost)
        [Fri Nov 19 14:51:32 2010] [debug] mod_deflate.c(615): [client xx.xx.xx.xx] Zlib: Compressed 468 to 320 : URL /jira

    But it doesn't work. Does anybody have an idea?
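
    Two things stand out. First, the log shows a worker for ajp://localhost:8080/ while the vhost proxies to ajp://localhost:8009/, so it is worth making sure the connector port and the ProxyPass port actually agree. Second, since the server.xml snippet did not survive above, the stock Tomcat AJP connector from that howto looks like the following (ports are the defaults; adjust to whatever your Jira's server.xml really uses):

        <!-- server.xml: enable the AJP/1.3 connector that Apache will talk to -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    ajp_ilink_receive failed is typically what Apache reports when nothing is actually listening on the port it is dialing.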

    Read the article

  • Active Directory FRS problems. 13508 error and other problems

    - by ITPIP
    I have 3 domain controllers; call them DC1, DC2, and DC3. DC3 and DC2 show Event ID 13508 in their FRS logs, with no follow-up event (13509) to say the error has been fixed. DC1's FRS log, no matter what you do, never shows any events besides FRS service stopped and started. DC1 holds the SYSVOL that needs to be replicated to the other DCs; the other DCs' SYSVOL folders are empty. I have tried the BurFlags method of fixing this, but I haven't had any luck. My procedure was to stop the FRS service on all DCs, set the BurFlags on DC1 to D4 and the other two DCs' BurFlags to D2, then start FRS on DC1. The only events I see in DC1's FRS event log are service stopped and service started messages. This fact is leading me to believe that something is wrong with FRS on DC1; I believe there should be events 13553 and 13516 in the FRS event log after an authoritative SYSVOL restore. The other two DCs do not have anything in their SYSVOL, otherwise I would have made one of them the authoritative SYSVOL. DC1 is MS Server 2003 Enterprise Edition SP2, DC2 is MS Server 2003 Standard Edition SP1, and DC3 is MS Server 2003 R2 Standard Edition SP2. I did not set up this domain originally, but I am now the administrator of it, so I don't have a lot of background on why certain things may have been done in the past. My main goal is to fix these issues to get myself better prepared to decommission DC1 and add a DC running Server 2008 to my domain. Thanks.
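
    For anyone following along, the BurFlags value lives in the registry. A sketch of the D4 (authoritative) side; D2 goes in the same value on the non-authoritative DCs, with NtFrs stopped everywhere first:

        REM Authoritative restore on DC1 (use D2 instead of D4 on DC2/DC3)
        net stop ntfrs
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup" /v BurFlags /t REG_DWORD /d 0xD4 /f
        net start ntfrs

    If DC1 still logs nothing (no 13553/13516), double-check the value landed under "Process at Startup" rather than directly under Parameters.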

    Read the article

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I've never got this working on the first try, but now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N The database is currently in use. SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I  The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N The database is currently in use. SQLSTATE=57019

    Anyone got any clues? I'm running the command windows as the local administrator (Windows 2008), who is also the admin for DB2. The connection pool user cannot connect while the database is quiesced.
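
    A blunt-instrument sketch that often shakes a phantom connection loose: force everything off at the instance level and bounce the instance before dropping. (This kicks every application on the whole instance, not just this database, so only on a box you own.)

        db2 force application all
        db2stop force
        db2start
        db2 drop database mydatabase

    On 9.5 the health monitor is another known suspect - it can quietly activate a database behind your back; db2 get dbm cfg shows whether HEALTH_MON is on.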

    Read the article

  • Enabling Samba Shares Across Subnets

    - by John
    I was curious how I could go about setting up Samba so that shares could be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like to have them serve Samba shares to clients that reside in a different subnet than the servers. Is there any way to do this without needing to set up a WINS server or use legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Macintosh OS X (10.6 or newer)? EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside of the subnet (i.e. the client subnet) cannot see or connect to the share. The error returned is: "The specified network name is no longer available." It does not seem to matter whether I use the IP, FQDN, or NetBIOS name to try and connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else seems to work correctly with this network setup, and the device can be pinged from multiple subnets. I also do not believe it to be a firewall issue, since the rules for this segment are rather lax.
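
    Since direct connections by IP fail too, this looks like port 445 being filtered between the subnets rather than a browsing/WINS problem. A quick hedged test from a Windows 7 client in the other subnet (the hostname is a placeholder):

        REM Can we even reach the SMB port across the router?
        telnet samba-server.example.com 445

        REM Does the share enumerate over plain SMB, with no NetBIOS browsing involved?
        net view \\samba-server.example.com

    If 445 connects but access still fails, check that smb.conf is not binding to a single subnet (the "interfaces" and "bind interfaces only" options).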

    Read the article

  • atl90.dll version 9.0.30729.4148 is missing in WinSxS folder

    - by mkva
    Hi, I have the following problem: when starting Visual Studio 2008, it says "Cannot find one or more components. Please reinstall the application." and stops. With the help of Sysinternals Process Monitor, I found out that Visual Studio could not load atl90.dll 9.0.30729.4148 from the WinSxS folder. I tried manually copying in the older atl90.dll 9.0.30729.1, with the result that Visual Studio works again. Now I call this a dirty workaround, not a solution. Plus, I still don't know why the atl90.dll disappeared in the first place. So my questions: does anyone know of a reason why this might have happened? Does anyone know a real solution to the problem, e.g. a Microsoft download that includes atl90.dll in the correct version 9.0.30729.4148 and installs it into WinSxS? Some details: WinXP SP3; missing DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.4148_x-ww_353599c2\atl90.dll; workaround DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_d01483b2\atl90.dll; the manifests in WinSxS seem to be alright, but unfortunately all point to the missing version 9.0.30729.4148. Thanks, Markus

    Read the article

  • How to install QEMU on Damn Small Linux?

    - by user2934303
    I'm trying to install QEMU on a Damn Small Linux installation in order to emulate Pentium features on a 486 computer. Though DSL was discontinued, it's the only Linux that runs reasonably on a 486 processor; most recent kernels don't even boot on the 486 architecture. I tried Tiny Core Linux, but it doesn't work on a 486, so I seem to have no escape here. The most recent image of DSL is from 2008, it uses kernel 2.4.x, and I couldn't find a way to compile QEMU on it. Firstly, it lacks several of the tools needed to compile it, and then there are several dependency problems. I tried some pre-compiled packages, but the only one that worked was a QEMU 5.2 RPM package (it didn't have dependency problems), and it was way too old: it wasn't capable of running Windows yet, it just gave me the option of emulating code, not a full OS like Windows, and it also didn't give me the option to choose which architecture I wanted it to emulate (the -cpu option). Can anyone help me with this? Also, if someone can think of an alternative, I'd be grateful. Thanks.
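
    If compiling on the box itself is a dead end, cross-compiling an old QEMU release on a modern machine and copying the binary over is one possible route. A rough sketch only - the toolchain name is illustrative, and 0.9.x is suggested simply because that era builds against a kernel 2.4 userland far more readily than anything modern:

        # On a modern build host with an i386/i486 cross toolchain installed
        tar xzf qemu-0.9.1.tar.gz && cd qemu-0.9.1
        ./configure --target-list=i386-softmmu --cc=i486-linux-gnu-gcc
        make

    Whether the result is usable is another question: full system emulation on a 486 will be extremely slow, since QEMU's dynamic translation assumes a much faster host.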

    Read the article

  • Unusual Caching Issue with IE 7/8 and IIS 7

    - by Daniel A. White
    We recently moved a site into production running Server 2008 x64 and IIS 7. The ASP.NET pages load just fine, but with IE 7 and 8 a weird caching issue has cropped up around the CSS and JavaScript files on the page. On a very sporadic schedule, IE does not get all the files necessary to compose the page (i.e. CSS and JS files). When I manually go to the missing files from the address bar, they come back from local cache as empty. I F5 these source files and magically they come down properly. I refresh the site after loading a few files and the cache seems to hold. This problem has only been reproduced (again, sporadically) on IE 7 and 8 running on XP. Chrome and Firefox appear to be immune. We have set IIS to use server-side kernel caching for CSS, JS and images, and we have set content in the App_Themes and Scripts directories to expire immediately. One initial thought was a SWF loading an FLV on page load. These fixes have not remedied the problem. We had no problems on our staging server, which runs Server 2003 and IIS 6. Any ideas would be greatly appreciated. P.S. It sounds similar to this problem, but we do have the Static Content module installed: http://serverfault.com/questions/115099/iis-content-length-0-for-css-javascript-and-images
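
    The key data point to capture next time is whether IE ever received a Content-Length: 0 response, because once a zero-byte body is cached, IE serves it back until a hard refresh. A hedged diagnostic web.config for the static folders (e.g. dropped into App_Themes\web.config) that forces revalidation while you investigate:

        <?xml version="1.0"?>
        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Diagnostic only: stop clients caching static files from this folder -->
              <clientCache cacheControlMode="DisableCache" />
            </staticContent>
          </system.webServer>
        </configuration>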

    Read the article

  • ReadyBoost in Windows 7

    - by Robert Koritnik
    I bought an SD card today for my photo frame, but when I inserted it into my notebook I saw I could use it for ReadyBoost. Some background: I'm a .NET developer, using VMs and developing web applications (and SharePoint). I use an HP notebook with a Core 2 Duo 2 GHz, 4 GB RAM, and a 320 GB 7200 rpm HD. I simultaneously run Visual Studio 2010 with some plugins, SQL Server, Firefox with at least 10 tabs, Chrome with about 5 tabs, IIS, a VM with a Server 2008 machine and SharePoint, and occasionally Photoshop and some InDesign as well. So I don't give my machine a break. :D Question: if I buy a really fast SDHC card (like the SanDisk 16GB Extreme 30MB/s - is there anything faster?) and use it with Windows 7 ReadyBoost, will I see any performance gain? Is it going to work something like Seagate's Momentus hybrid drive with its 4 GB of solid-state storage? What could I actually expect if I put this card into my machine? And what would be the recommended size? Observations: I guess redirecting the page file to it would speed up the system, and some VMs on it would probably run faster as well, because they could run in parallel with the host system's disk, I guess. Am I right or wrong?

    Read the article

  • Ballooning Mac OS X kernel_task and Wired memory usage. How to diagnose / fix?

    - by user28930
    I have a very strange issue, which I'm having a hard time diagnosing as to the root cause. I have a Mac Pro (2008, 8-core 2.8 GHz, 8800GT) with 14 GB of RAM (recently upgraded because of this issue!). When I boot my system and log in, vm_stat / top / Activity Monitor will show that kernel_task has about 150 MB allocated, and the machine has about 800 MB of Wired memory allocated. Even initially, 800 MB seems an awful lot of wired memory to be allocated with no applications running - but it gets worse. (NB: Wired is locked, unswappable memory.) After a very short time, sometimes triggered by something as simple as launching a terminal, kernel_task will balloon to 8-900 MB of Real Mem (RSIZE), and Wired memory will accelerate to 1.6 GB (implying that all the extra memory requests are for wired RAM in the kernel). If I quit everything (i.e. no running applications, bar an Activity Monitor or terminal to view top), there is no appreciable reduction in either kernel_task RSIZE or Wired memory usage. Going the opposite way and loading the system with tasks also shows that wired memory does not get reduced - and, importantly, it is not reduced in preference to heavy swapping. If I log out and log back in again, it reduces a bit (450 MB kernel_task, 1.28 GB Wired), but not back to the start. I'm not running any wacky kexts - and furthermore, kextstat shows no huge memory allocations there, the largest being com.apple.nvidia.nv50hal at about 4 MB. The machine feels overall more sluggish when this has happened - unsurprising, because such a huge amount of RAM has been marked as non-pageable. So I have a few questions: 1) Is there a good way to diagnose what has allocated all of this wired memory? It's often over 2 times the kernel_task size, running no applications. The real memory total doesn't seem to add up - it seems that there is a bunch of RAM that isn't being accounted for anywhere. 2) What is happening to cause the kernel to suddenly require 6 times as much memory?
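
    On the first question, a hedged way to see where the wired memory is going is to dump the kernel zone allocator's per-zone totals; it at least separates kernel zone growth from driver/IOKit allocations:

        # Kernel zone allocations, largest first (column layout varies by OS X release,
        # so adjust the sort key to whichever column holds the current size)
        sudo zprint | sort -k4 -n -r | head -20

        # System-wide page counts for comparison
        vm_stat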

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uWSGI-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask, in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uWSGI both to run as the www-data user. If I launch uWSGI "manually" from a root shell, I can see the uWSGI processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens: the uWSGI processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM, under Windows Server 2008. Relevant configuration:

        [uwsgi]
        socket = /var/www/sockets/cowsay.sock
        chmod-socket = 666
        abstract-socket = false
        master = true
        workers = 2
        uid = www-data
        gid = www-data
        chdir = /var/www/cowsay/cowsay
        pp = /var/www/cowsay/cowsay
        pyhome = /var/www/cowsay
        module = cowsay
        callable = app

    supervisor:

        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    I'm sure I'm missing some minor detail, but I'm unable to notice it. Would appreciate any suggestions.
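
    One detail jumps out: the supervisor command line rebuilds the uWSGI invocation from -s/-w flags and never loads the [uwsgi] ini at all, so the uid/gid (and master/workers) settings there are simply not in effect under supervisor. A hedged fix is to point the command at the ini file - the path is an assumption, use wherever that [uwsgi] block actually lives:

        [program:cowsay]
        ; Load the full uWSGI config, including uid/gid, instead of rebuilding it from flags
        command = /var/www/cowsay/bin/uwsgi --ini /var/www/cowsay/cowsay.ini
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    And since the workers show up under friendzis, check that supervisord itself runs as root: the user = www-data key only takes effect when supervisor has privileges to drop, and a supervisord started from a user shell keeps that user.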

    Read the article

  • Update BIOS on Sun Fire X4150 server

    - by Massimo
    I have some Sun Fire X4150 servers with a very old BIOS release (1ADQW015), which seems to have some compatibility problems with VMware ESX Server 3.5 and Windows 2008 R2 virtual machines, so I want to update the BIOS on them. The problem: according to this page, if your servers run ELOM (mine do), you first need to update to the latest ELOM release, then to the interim transition release, and only then can you update to the latest one. OK, I'm willing to do that... but it looks like Sun (now Oracle) will happily let you download the latest firmware DVD (3.3.0), but will not let you download the transition release (2.0) if you don't have a support contract. Well, I actually don't care at all about the servers' management controllers (we don't even use them), so upgrading from ELOM to ILOM is totally irrelevant to me; but I need to update the servers' BIOS. So my question is: can I update the servers' BIOS to the latest version without doing the full ELOM-to-ILOM migration, or will this not work (or even make the servers unusable)? Do BIOS versions and SP ones need to be matched, or can one be updated without bothering with the other? Bonus question: if this whole ELOM-to-ILOM thing actually is needed in order to update the BIOS, can that 2.0 CD-ROM be obtained without having a support contract with Sun/Oracle (which we are definitely not going to sign, given that it's quite old hardware)? Update: I tried upgrading only the BIOS on one of the servers, and it didn't boot anymore. So it really looks like a full firmware upgrade is needed, and the management controller and BIOS versions should be kept in sync. So... where can I find that *&!£%$% 2.0 CD-ROM? Or at least the transition firmware that can be found on it?

    Read the article

  • Windows 7 Users unable to add Windows 2003 server printers

    - by TravBrack
    Hi there. I just rolled out a few Windows 7 x64 machines and ran into this issue where non-admin users are unable to add printers hosted on a Windows 2003 server. It works fine from a 2008 server. The issue appears to be with the point and print system: a user will attempt to add the printer, a prompt will come up requiring the user to elevate privileges in order to install a driver, and it will fail citing "access denied". I found the group policy setting Point and Print Restrictions ("When the policy setting is disabled: Windows Vista computers will not show a warning or an elevated command prompt when users create a printer connection to any server using Point and Print"), so I disabled it and verified that the policy was being picked up using rsop, but it still does the same thing. I've also tried the following: recreating the printers using newer drivers; adding the printer using 32-bit drivers on the 2003 machine, then adding the 64-bit drivers from a Windows 7 machine; adding the printer from a Windows 7 machine using Print Management. None of these work. The security settings are no different from the working printers. Help?
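
    One Windows 7 wrinkle: Point and Print Restrictions exists under both User Configuration and Computer Configuration, and Windows 7 honors the Computer Configuration copy, so a policy disabled only on the user side may never reach these machines. A hedged check of what the client actually applied:

        REM What landed on the client
        reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Printers\PointAndPrint"

        REM One commonly used "no prompts" combination looks like:
        REM   Restricted                     REG_DWORD  0x1
        REM   NoWarningNoElevationOnInstall  REG_DWORD  0x1
        REM   UpdatePromptSettings           REG_DWORD  0x2

    If the key is missing entirely, the setting is not reaching the machine at all.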

    Read the article

  • Remote Desktop Session Black after Minimize

    - by TorgoGuy
    PROBLEM: When I minimize a remote desktop session and restore it, the remote desktop screen shows up black. This only happens when connecting to one particular computer. DETAILS: If I start clicking around in the black area, portions of the screen will start redrawing and showing up correctly. For example, if I leave a window open in the remote session and click where that window is located on the remote computer, then that window - and only that window - will redraw, and sometimes a portion of that window won't redraw (usually the toolbar). And to clarify: the window only has to be minimized momentarily, so it doesn't seem to be a timeout issue. Clicking or typing in the remote session still causes the remote computer to respond appropriately. Disconnecting from the session and reconnecting restores the whole screen image, as does clicking all over the place in the black image (causing each section to redraw). CONFIGURATION: This problem only happens for me when connecting to a particular computer (a W2K Server box configured to allow remote administration), and only with certain client computers. I've tried 7 different client computers with various versions of the Remote Desktop client (the OSes were: Win2K, Server 2003, Server 2008, Windows 7 RC, and 3 XP boxes), and two of them exhibit the problem (one is one of the XP boxes, and the other is Windows 7). Those same computers can RDP to other computers without problems. RESOLUTION ATTEMPTS: I have tried the following: disabling the LOCAL screen saver as mentioned on Technet; turning off bitmap caching in the client, as mentioned on many forums; updating to version 6.1 of the Remote Desktop client; using mRemote (I doubted this would work, since it uses MS's code for connecting to RDP servers); turning off all video acceleration. QUESTION: Any ideas on what is causing this?

    Read the article

  • A Domain Admin user doesn't have effective Administrative rights on a Domain Computer

    - by rwetzeler
    I am a developer who is setting up a virtual domain environment for testing purposes, and I am having trouble with the setup. I have created a new DC in a new forest; call it dev.contoso.com. I have set up a virtual internal network for all machines that are going to be a part of this virtual test environment, and have given each machine a static IP address in the 192.169.150.0 subnet. I have added machine1.dev.contoso.com to the domain dev.contoso.com. I have also provisioned a user account (adminuser) in the domain and made that user a member of the Domain Admins group. Upon logging into machine1 using my newly created Domain Admin account, I cannot access/run any files on machine1. When I go into the advanced permissions for the C:\ folder (Properties - Security tab - Advanced - Effective Permissions) and search for dev\adminuser (mentioned above), I get an error saying: "Windows can't calculate the effective permissions for adminuser". What do I need to do to get administrative rights on machine1? I am using Server 2008 R2 for both the AD controller and machine1.
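
    One sanity check before chasing ACLs: with UAC on 2008 R2, an interactive Domain Admin logon runs with a filtered token until elevated, and the Effective Permissions calculator also has to read group memberships from the DC. A quick hedged look at what the logon token really contains:

        REM Run on machine1 while logged on as dev\adminuser
        whoami /groups | findstr /i "Admins"

    If Domain Admins shows up marked "Group used for deny only", this is UAC token filtering rather than broken permissions; retrying from an elevated (Run as administrator) prompt is the simple test.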

    Read the article

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office are currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office, and the latency is about 350 ms. The terminal server performance is reeeeealy bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly. And if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a 5-level dynamic class of service with priority in the following order: VoIP, video, terminal services, printing, everything else. When testing the terminal server performance, the link was monitored using NetFlow, and we have plenty of bandwidth available, so I believe that it is a latency issue rather than bandwidth. Is there anything that can be done to improve performance? Would Citrix help at all?
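
    Client-side experience settings make a real difference at 350 ms, because they suppress exactly the redraw traffic (wallpaper, animations, full-window drags) that hurts most. A hedged .rdp file snippet with the usual suspects turned off:

        disable wallpaper:i:1
        disable full window drag:i:1
        disable menu anims:i:1
        disable themes:i:1
        bitmapcachepersistenable:i:1
        compression:i:1

    These map to the Experience tab in mstsc; none of them reduce the latency itself, they just cut the number of round trips per screen update.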

    Read the article

  • Installing the MySQL Ruby gem on 64-bit CentOS

    - by Jacek
    Hi, I have a problem installing the MySQL Ruby gem on a 64-bit CentOS machine.

        [jacekb@vitaidealn ~]$ uname -a
        Linux vitaidealn.local 2.6.18-92.el5 #1 SMP Tue Jun 10 18:51:06 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

    The mysql and mysql-devel packages are installed. mysql_config provides the following paths:

        Usage: /usr/lib64/mysql/mysql_config [OPTIONS]
        Options:
          --cflags         [-I/usr/include/mysql -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv]
          --include        [-I/usr/include/mysql]
          --libs           [-L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto]
          --libs_r         [-L/usr/lib64/mysql -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -L/usr/lib64 -lssl -lcrypto]
          --socket         [/var/lib/mysql/mysql.sock]
          --port           [3306]
          --version        [5.0.45]
          --libmysqld-libs [-L/usr/lib64/mysql -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -L/usr/lib64 -lssl -lcrypto]

    Trying to install:

        [jacekb@vitaidealn ~]$ gem install mysql -- --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        ...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby extconf.rb --with-mysql-include=/usr/include/mysql --with-mysql-libs=/usr/lib64/mysql
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

    I would appreciate any help. Thanks for reading :).
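
    When even the check for main() in -lm fails, mkmf usually cannot run the compiler at all (gcc or ruby-devel missing) rather than failing to find MySQL specifically. A hedged sequence to try:

        # Make sure a toolchain and the Ruby headers exist before pointing at MySQL
        sudo yum install -y gcc make ruby-devel mysql-devel

        # Let the gem interrogate mysql_config instead of hand-feeding include/lib paths
        gem install mysql -- --with-mysql-config=/usr/lib64/mysql/mysql_config

    mkmf.log will state exactly why the -lm probe failed; if it says "gcc: command not found", the include and lib flags were never the problem.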

    Read the article
