Search Results

Search found 16311 results on 653 pages for 'environment variables'.

  • How can I make WSUS less invasive for our users?

    - by Cypher
    We have WSUS pushing updates out to our users' workstations, and things are going relatively well, with one annoying caveat: there seems to be an issue with a pop-up being displayed in front of some users informing them that their machine will be rebooted in 15 minutes, and they have no say about it (the pop-up screenshot is omitted here). This may be because they did not log out the prior night. Nevertheless, this is a bit too much and is very counter-productive for our users. A bit about our environment: our users run Windows XP Pro and are part of an Active Directory domain, and WSUS is applied via Group Policy (a snapshot of the GPO enforcing the WSUS rules is likewise omitted). Here is how I want WSUS to work (ideally; I'll take whatever gets me close):
    - Updates should automatically download and install every night.
    - If no user is logged in, the machine should reboot.
    - If a user is logged in, the machine should not reboot; instead it should wait until the next "installation period", when it can perform any other needed installations and reboot then (provided a user account is not still logged in).
    - If a user is prompted to reboot, it should happen at most once per day (if possible), and every prompt must offer a way to postpone the reboot.
    I do not want users to be forced to restart their computer whenever the computer thinks it should happen (unless it is after an update installation and no users are logged in); forcing a system restart in the midst of a person's workday is not productive. Is there something I can do with the GPO to make WSUS less intrusive? Even giving the user an option to Restart Later would be better than what is happening now.
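
    The GPO ultimately writes the documented WSUS client policy values under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU, so here is a sketch of the combination I would aim for (my suggestion, not from the original post; in production set the equivalent GPO options rather than the raw registry):

        rem Sketch: install nightly, but never force a reboot while someone is logged on.
        rem AUOptions 4 = auto download and schedule the install
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v AUOptions /t REG_DWORD /d 4 /f
        rem ScheduledInstallDay 0 = every day; ScheduledInstallTime 3 = 03:00
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v ScheduledInstallDay /t REG_DWORD /d 0 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v ScheduledInstallTime /t REG_DWORD /d 3 /f
        rem Corresponds to the GPO setting "No auto-restart with logged on users ..."
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f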

  • Print Spooler service stops running when a print job is sent

    - by Hanan N.
    Every time I send a print job to the printer, I get no response from it, and in the printer's job list the job status shows an error, with no clue as to what the problem could be. After some investigation I found that every time I send a print job, the Print Spooler service stops running, then starts again a second or two later (I think this behavior is due to the service's recovery settings, which restart it after it stops). Things I have tried so far:
    - Removed and reinstalled the driver. After removing the driver, I removed the unnecessary registry keys according to this article from Microsoft:
      - Rename all files and folders in c:\windows\system32\spool\drivers\w32x86.
      - Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Environments\Windows NT x86, remove everything except Drivers and Print Processors.
      - Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Print\Monitors, remove everything except: BJ Language Monitor, Local Port, Microsoft Document Imaging Writer Monitor, Microsoft Shared Fax Monitor, Standard TCP/IP Port, USB Monitor, WSD Port.
    - Disconnected and reconnected the printer.
    - Cleaned the computer of viruses and spyware.
    Currently I am stuck and have no more things to try; if anybody knows of any kind of solution, please let me know. Since I want to keep this post a general printer-spooler problem rather than just my particular case, I didn't include the Windows version and printer model above; for the record, they are Windows 7 32-bit and an HP Officejet 4500 G510g-m connected to the computer via USB (though I don't think the issue relates to that particular model). Thanks.
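
    One more step worth trying (my addition, not from the original post): a corrupt job stuck in the spool directory can crash the service on every print attempt, so clearing the queue manually sometimes breaks the crash loop. From an elevated command prompt:

        rem Stop the spooler, purge any stuck spool files, and restart it
        net stop spooler
        del /f /q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler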

  • Parallels 6 - Is It Just Me or Does It Run Really Slowly?

    - by 5arx
    I've been running Parallels since version 2 with great success. I use it as my .NET development environment, and over the last few years I have converted so many others to the Parallels/Mac way of doing Windows/.NET development that I feel I should be getting perks and/or freebies from Parallels ;-) A month or so ago I upgraded to version 6 and... immediately wished I hadn't. I'm currently running it on a laptop, a 2009 MacBook Pro (13"/2.53 GHz/4 GB), while my Mac Pros at work and home are still running v5; I have seen nothing in v6 that makes me want to upgrade them. The general problem is performance: upon starting or suspending a VM (always Windows 7 Ultimate), OS X slows down, quite often freezing for a minute or two at a time. The performance of the VMs themselves is fine, but for me the point of this setup is to be able to do web browsing, email checking, etc. on the OS X side while doing the things that can only be done on Windows (Visual Studio, SQL Server tools) in Windows. I have been using Parallels for a while, so I at least feel like I know what I'm doing, and at the moment I am forming the opinion that it's Parallels that's to blame. I've tweaked and tweaked the VM configuration properties, but to no avail. Support emails to the company have all received replies along the lines of "there are no documented cases of the issue you mention". Has anyone else seen this problem, and if so, have you found a fix?

  • How to sandbox a VMWare image as much as possible

    - by Craig H
    The situation: a corporate environment, with corporate-managed XP desktops (locked down, patched regularly, restricted user rights, no manual installation of software, AV, etc.).
    The requirement: using VMware Workstation, run a sandboxed image (also XP) for specific testing purposes, with admin rights in the guest VM. No network connectivity is required, but it can't be a separate standalone physical workstation disconnected from the network. (FWIW, this is a legitimate, sanctioned requirement, not someone trying to get around corporate restrictions.)
    The challenge: do this in as safe/secure a manner as possible.
    The proposed solution: create an image with host-only networking, and perhaps remove the virtual Ethernet adapter (I'm not sure whether it's required for basic VMware functionality).
    The question (finally): what potential risks remain, and how could I best mitigate them? One challenge is that the guest VM will not be a managed workstation itself, so patching, AV, etc. can't be guaranteed (and, ironically, would in fact be somewhat difficult given the proposed solution!).
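
    On the adapter question: a VM boots fine with no virtual NIC at all, so if the goal is to guarantee there is no network path, removing the adapter entirely is an option. A sketch of the .vmx change (a standard VMware Workstation setting; edit with the VM powered off):

        # In the guest's .vmx file: disable (or delete the lines for) the first NIC
        ethernet0.present = "FALSE"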

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this figure published for any HDD. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD". Is that the same thing? Just so you know where I'm getting this from (book: "Information Storage and Management: Storing, Managing..."), consider an example with the following specifications provided for a disk:
    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which the rotational latency (L) can be determined; it is one-half of the time taken for a full rotation, or L = (0.5/250) s, expressed in ms.
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O; for example, for an I/O with a block size of 32 KB, X = 32 KB / 40 MB/s.
    Consequently, the time taken by the controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB/40 MB/s = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
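
    For reference, the book's arithmetic can be reproduced directly (my sketch, not from the book); the rotational latency term works out to 2 ms and the transfer term to roughly 0.8 ms:

        # Service time (ms) and IOPS for the example disk above
        awk 'BEGIN {
            seek    = 5.0;                        # average seek time, ms
            latency = 0.5 / 250 * 1000;           # half a rotation at 250 rps, ms (= 2.0)
            xfer    = 32.0 / (40 * 1024) * 1000;  # 32 KB at 40 MB/s, ms (~0.78)
            ts = seek + latency + xfer;
            printf "TS = %.2f ms, IOPS = %.0f\n", ts, 1000 / ts;
        }'

    This prints roughly 7.78 ms and 129 IOPS; the book rounds the transfer term to 0.8 ms, giving its 7.8 ms and 128 IOPS figures.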

  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too, but only for console logins. If I log in to my desktop and open a terminal there, I still get the default umask 022. The same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component is responsible) sources some different setting than a console login does, and I can't for the life of me tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How and where do I set the umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
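
    One angle worth trying (my suggestion, not from the original question): pam_umask applies to every PAM-mediated session, display managers included, so it sidesteps the question of which file the X session sources. A minimal sketch for Debian/Ubuntu-family systems; pam_umask also has a usergroups option aimed at exactly this private-group setup:

        # /etc/pam.d/common-session (sketch): set umask 002 for all PAM sessions
        session optional pam_umask.so umask=0002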

  • vSphere - datastore falling off a host

    - by Chadddada
    Recently we have been running the vCheck PowerShell script daily to help monitor our vSphere ESX 4.0 environment. One of the oddities we have been seeing is that some of the datastores on the SAN don't always show up on every host. Our hosts are connected redundantly, via FC, to Brocade FC switches, which then connect via fiber to our EMC AX4 SAN. While all the datastores are presented to each host, and the hosts see them initially, datastores sometimes seem to fall off and are no longer visible. It is easy enough to rescan for datastores and add them back to the hosts, but this seems like an error. Has anyone else seen this, or does anyone know why it may be happening?
    Response to a question ("Is it always the same ESX servers that lose their connection?" - Scott Warren): no, this happens randomly on random hosts. If a VM is running on a particular host, and that VM's disks are on a SAN datastore, then that datastore won't disappear. It seems to happen when a host doesn't touch a datastore for a while and just forgets about it.
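
    While investigating, the rescan can also be done from the host's console rather than the vSphere client (a sketch, assuming console access on ESX 4.0; the adapter name is a placeholder):

        # Rescan an HBA for LUNs, then refresh VMFS volumes
        esxcfg-rescan vmhba1
        vmkfstools -V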

  • Simple VLAN setup

    - by Logan Bissonnette
    I have a basic lab environment set up to try to get two VLANs working in Hyper-V. I have the following equipment:
    - 1 Hyper-V server
    - 1 desktop PC
    - 1 managed switch (D-Link DES-3052P)
    - 1 cheap router (DI-604)
    My end goal is to have one VM and the desktop on one VLAN with Internet access, and one VM on a separate VLAN with Internet access. I am having trouble getting an Internet connection to both VLANs. The switch does not support asymmetric VLANs. This is my switch configuration:
    - Port 1: trunk port, connected to the router
    - Port 2: trunk port, connected to the Hyper-V server
    - Port 3: access port, connected to the desktop
    Within Hyper-V I have one virtual switch and two VMs. When the VMs are set to use VLAN ID 1, everything works fine. As soon as a VM is set to use VLAN ID 2, it loses all network connectivity and can no longer communicate with the router. I believe this is because the router is not VLAN-aware. Can anyone help me with the settings I need on the switch? I believe I want an egress rule so that traffic leaving towards the router is untagged; is that right? If not, any ideas or hints as to what needs to be set up?
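
    You're right that egress toward the DI-604 must be untagged, since a VLAN-unaware router will not handle tagged frames. As a sketch of the membership scheme (D-Link DES-series CLI syntax from memory; verify every command against the DES-3052P manual):

        # VLAN 2 exists only between the switch and the Hyper-V trunk (sketch)
        create vlan v2 tag 2
        config vlan v2 add tagged 2
        # Router and desktop stay untagged members of the default VLAN
        config vlan default add untagged 1,3

    The catch is that a standard 802.1Q port can be an untagged member of only one VLAN, so with a single VLAN-unaware router both VLANs cannot simply egress port 1 untagged; that is exactly the limitation D-Link's asymmetric VLAN feature (absent on this switch) usually papers over.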

  • Best Asp.net Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies that spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET host? Please consider the following requirements:
    1) ASP.NET 3.5 or 4.0 supported.
    2) URL rewriter support.
    3) GZip support (dynamic, through code).
    4) Initial setup support (if required).
    5) SQL Server 2005 or 2008.
    6) Access to the SQL Server DB using SQL Management Studio.
    7) An environment supporting backup and restore of the DB on my own, without involving the tech support team.
    8) Full-text search support.
    9) FTP support.
    10) The ability to send at least 500 emails daily.
    11) 99.9% uptime (never mind that all web hosts claim 99.9% uptime; it's not true).
    12) An alert email sent when they do any maintenance or during downtime.
    13) A reasonable hosting price.
    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

  • Nginx, as a reverse proxy, cannot proxy_pass to a domain pointing to the local JBoss

    - by larryzhao
    My environment is Ubuntu 12.04, Nginx 1.2.0, and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080, and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added the following to /etc/hosts:

        127.0.0.1 app1.mydomain.com
        127.0.0.1 app2.mydomain.com

    Then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly. Then I go to my nginx. I would like nginx to pass visits to www.app1.com through to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;
            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work: curl www.app1.com returns nothing, and if I visit www.app1.com in Safari, the HTTP return code is 404. I don't know why; I need help.
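
    Two things stand out (my observations, not from the original post). First, the proxy_pass target (app1.mydomain) doesn't match the /etc/hosts entry (app1.mydomain.com); if that isn't just an anonymization slip, nginx can't resolve the upstream. Second, because the config forwards the browser's Host header (proxy_set_header Host $host), Torquebox must have a virtual host answering for www.app1.com, not just app1.mydomain.com, and a 404 is exactly what a missing vhost would return. Both are quick to test from the server:

        # Can the upstream name used in proxy_pass actually be resolved?
        getent hosts app1.mydomain
        # What does Torquebox return when the Host header is the public name?
        curl -v -H "Host: www.app1.com" http://127.0.0.1:8080/
        # Compare with the vhost that is known to work:
        curl -v -H "Host: app1.mydomain.com" http://127.0.0.1:8080/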

  • Multiple Remote Desktop Connections in Windows Server 2003?

    - by Joel Bradley
    My company is transitioning all user PCs to Windows 7 64-bit in anticipation of the 2014 cutoff for Windows XP support. So far everything has been going great, except for one specific piece of software that will not run in Windows 7. The current plan is to give everyone a cheap secondary PC to run this software, but I feel that's a bit much for software that's not even used all the time, although it is essential. I've suggested we install virtual machines, but the company does not want to pay for the XP licenses. I have access to a copy of Windows Server 2003 that is no longer being used, and I was wondering whether it is possible to create a remote desktop server from it. I know it can be done on a one-to-one basis, but this is a 15-person helpdesk. I'd like to be able to support multiple remote desktop sessions, each with its own login and desktop. Is this possible? Are there any other alternatives for my issue? FYI, I've been told that XP Mode is only free for consumers; there are costs when it is used in a corporate environment.

  • What methods are available for updating a non-Internet-connected VMWare ESXi host?

    - by romandas
    I have a stand-alone installation of VMware vSphere Essentials, with a vCenter Server and three ESXi 4.0 host servers. The environment is intended to remain a stand-alone network, with the exception that I can "float" a workstation or server between the 'Net and the VMware network for patches and maintenance. In other installations, where the Internet is available, I've used the vSphere Host Update utility to connect to VMware and then apply the patches to the ESXi hosts. My problem is that this utility does not seem to function unless it can connect to both VMware and the ESXi host at the same time: the scan-for-patches function will not scan a server without first connecting to VMware's site to sync its repository. Even if I sync it, disconnect from the 'Net, and connect to the VMware network, it still won't scan hosts for required patches; it prompts to sync with VMware, and if you click No, the scan does not occur. Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down the required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something a bit more scalable.
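
    For the manual route, the vSphere CLI's vihostupdate tool works entirely from offline patch bundles, so the only thing that has to cross the air gap is the downloaded zip (a sketch; the host name and bundle file are placeholders, and the host should be in maintenance mode):

        # Apply an offline bundle to one host, then confirm what is installed
        vihostupdate --server esxi-host01 --username root --install --bundle ESXi400-201007001.zip
        vihostupdate --server esxi-host01 --username root --query

    Scripting that pair over a list of hosts gets most of the way to "automated" without Internet access on the VMware network.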

  • ConfigMgr 2012 - How to automatically make updates available to computers without forcing them to be installed?

    - by Massimo
    I'm using System Center Configuration Manager 2012 with the Software Update Point feature; however, in this environment patching has to be strictly manual, because server reboots need to be approved and scheduled by different people. Thus, I need to use ConfigMgr's SUP like I would use a plain WSUS server with auto-approval but manual installation. I created some Automatic Deployment Rules to automatically download and deploy critical updates, with an installation deadline of "as soon as possible"; but I've also configured those rules to not do anything when the deadline is reached, and to not perform system restarts even if needed (settings screenshot omitted). Also, I've configured the device collection those rules deploy updates to so that it has no valid maintenance window. However, I'm experiencing quite the opposite of what I expected: as soon as the new updates are processed by the ADRs, they are automatically installed on all systems by the Software Center, and the computers are subsequently restarted. Why is this happening? Am I getting something wrong, or is ConfigMgr 2012 just not behaving like it should?

  • Java Deployment and Configuration (1.6.0_21)

    - by user125137
    Software: Java Runtime Environment 1.6.0_21
    OS: Windows XP Professional 32-bit, SP3
    Situation: a new piece of web-based software is being deployed this week, and prior to this all the company desktops need to be set up to meet its requirements. One of these requirements is JRE 1.6.0_21. I have successfully scripted the removal of all other Java versions and the installation of the required version; however, I cannot get it configured properly. One of the requirements is that the Java console be set to disabled; if it is not, it can cause an issue with a particular function. I have pushed out a deployment.config and a deployment.properties, but the console just will not disable itself. I know the config is being read correctly, because the update tab is being correctly disabled and removed.
    deployment.config:

        deployment.system.config=file\:C\:/WINDOWS/Sun/Java/Deployment/deployment.properties
        deployment.system.config.mandatory=true

    deployment.properties:

        #deployment.properties
        #Fri Jun 15 09:34:31 EST 2012
        deployment.version=6.0
        deployment.console.startup.mode=DISABLE
        deployment.javaws.autodownload=NEVER
        deployment.javaws.autodownload.locked=

    There is no change if I set the console to ENABLE either; it remains on the default of hidden. I'm sure I can disable the console with a registry change of some form, but my preference is to have it done via the deployment files, as that gives the option of centralising the properties file on a network share if we wish. If anyone has any suggestions, it would be appreciated.
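
    One detail that stands out (my observation, not from the original post): in the deployment.properties scheme, a system-level value only overrides whatever is already in a user's own deployment.properties when the matching .locked key is present, and here only the autodownload key is locked. A sketch worth trying:

        # deployment.properties: lock the console setting as well
        deployment.console.startup.mode=DISABLE
        deployment.console.startup.mode.locked=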

  • Hotmail mail delivery issue (spam)

    - by chaochito
    Hello, I am running a Postfix server on a dedicated Linux server (CentOS 5.3) for a social networking web application, and I am experiencing deliverability issues with Hotmail (mail to Gmail, Yahoo, and AOL lands in the inbox). I only send legitimate mail for registered users (notifications). I have SPF, DomainKeys, and DKIM set up. I pass the Sender ID test when mailing to [email protected], but in Hotmail headers we only get "X-Auth-Result: None" and no "X-SID-Result: Pass". We have been enrolled in their program for more than two weeks, and normally when you apply to their Sender ID program you are supposed to get X-SID-Result: Pass and X-Auth-Result: Pass. I contacted Hotmail about the issue, and they told me that my domain looks correctly added to Sender ID in their system, that this is beyond their support, and that I should contact my ISP. As you can imagine, my ISP has no clue about this either. I don't really know what could be wrong... Mail is currently filtered as spam, and we would like it to land in the inbox.

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however, if for whatever reason I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnishd in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down "silently". My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.
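
    When a start fails "silently", the reason often still lands in Varnish's shared-memory log even though nothing reaches syslog. A sketch of what I would capture on the next failed start (standard Varnish tooling; /var/lib/varnish is the usual working directory on Ubuntu):

        # In a second terminal while (re)starting varnishd: watch the shm log
        varnishlog
        # After a failure: look for stale state left by the previous instance
        ls -l /var/lib/varnish/

    A stale shared-memory file or a lingering varnishd process from the previous run is a classic cause of intermittent startup failures, so it is worth ruling out before digging into the parameters.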

  • Instructions to set up a primary and only domain controller

    - by Robert Koritnik
    Where could I get the best step-by-step instructions (with some simple explanations) on how to set up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well as AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it... This is my HW config:
    - a Linksys Internet router with DHCP
    - my dev machine runs Windows 7
    - my DC machine is a VM inside my dev machine
    - my dev machine has a hardware network adapter to the Linksys box and a virtual network adapter to the DC
    - the DC machine has two network adapters: one to the Linksys box (so it is Internet-connected and can be updated, etc.) and one to the host (my dev Win7 machine)
    Edit: my development machine should access the domain controller and log on using domain credentials, while accessing the Internet directly via the Linksys router. My domain controller machine would only serve authentication and, if I am able to configure it right, should also have Active Directory Federation Services in a workable condition. I hope this is a bit clearer now, at least a small bit.
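
    For reference, on Server Core the promotion itself is driven by dcpromo with an unattend file (a sketch; dev.local, DEV, and the password are placeholders, with option names from the Windows Server 2008 R2 dcpromo documentation). Installing DNS alongside AD DS is the usual choice for the first DC in a new forest:

        ; C:\dcunattend.txt (sketch)
        [DCInstall]
        ReplicaOrNewDomain=Domain
        NewDomain=Forest
        NewDomainDNSName=dev.local
        DomainNetbiosName=DEV
        InstallDNS=Yes
        SafeModeAdminPassword=P@ssw0rd!
        RebootOnCompletion=Yes

    Then run:

        dcpromo /unattend:C:\dcunattend.txt

    For the visibility problem, in a throwaway dev lab the quickest check is whether the firewall is the culprit: netsh advfirewall set allprofiles state off, test the connection, then turn it back on and open just the AD/DNS rule groups.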

  • Virtual IPv6 Network between VirtualBox VMs

    - by Ben
    I'm trying to create a virtual IPv6 network as a test environment. I have five VirtualBox VMs (Ubuntu Server) with network adapters using host-only networking. You can imagine them connected in series, with every machine connecting two subnets. I want to ping the last machine from the first one: from 2001:db8:aaaa::100 I want to ping 2001:db8:dddd::101. (Note: there is no cccc network in between.) Only static configuration and routes are used. On the first machine, /etc/network/interfaces is:

        auto eth0
        iface eth0 inet6 static
            address 2001:db8:aaaa::100
            netmask 64

    On the next machine:

        auto eth0
        iface eth0 inet6 static
            address 2001:db8:aaaa::101
            netmask 64

        auto eth1
        iface eth1 inet6 static
            address 2001:db8:bbbb::100
            netmask 64
            up ip -6 route add 2001:db8:dddd::/64 via 2001:db8:bbbb::101 dev eth1
            down ip -6 route del 2001:db8:dddd::/64 via 2001:db8:bbbb::101 dev eth1

    I thought there might be some automatic route discovery going on; anyway, ping6 2001:db8:dddd::100 does not work from aaaa::100. When I add the route

        ip -6 route add 2001:db8:dddd::/64 via 2001:db8:aaaa::101

    it works. But the next interface on the same network, dddd::101, is not reachable. How can that be? There is a machine with an interface bbbb::101 and another with dddd::100, and I can ping the latter, but not the machine connected to it, dddd::101? I have also turned on forwarding. Any ideas?
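
    Two things are easy to miss with a chain of static IPv6 routes (my suggestions, not from the original question): forwarding must be enabled on every intermediate VM, and the machines further down the chain need a route back toward 2001:db8:aaaa::/64, or the echo replies are silently dropped on the return trip; being able to reach a host one hop away proves nothing about the reverse path from two hops away. A sketch:

        # On every intermediate VM: enable IPv6 forwarding
        sysctl -w net.ipv6.conf.all.forwarding=1
        # On the dddd::101 machine: a return route toward the aaaa network
        ip -6 route add 2001:db8:aaaa::/64 via 2001:db8:dddd::100
        # Then verify the path hop by hop
        traceroute6 2001:db8:dddd::101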

  • VMWare web UI intermittent access on CentOS

    - by PeteWilliams
    Hiya, I've got a CentOS 5.2 server that I'm trying to get set up as a development environment. As part of this, I planned to install VMWare Server 2 and set up several virtual development servers. I've got as far as installing VMWare Server 2 but access to the remote control panel is only working intermittently. If I access it through Firefox at https://127.0.0.1:8333/ui/# it usually says either: "Connection intterupted: connection was reset before the page loaded" Or "Firefox can't establish a connection to the server at 127.0.0.1" But every now and then it lets me in and I'll manage a few clicks in the web UI before it kicks me out with the following error: "The server could not complete a request (HTTP 0 ). The server encountered an unexpected condition that prevented it from fulfilling the request. If this problem persists, please contact your system administrator." I've done all the updates available in CentOS except one OpenOffice one that is causing a conflict, and I re-ran wmware-config.pl after updating the kernel. Though I went with all the defaults as I don't really know what I'm doing! I've since rebooted and nothing changed. I've also tried accessing the control panel remotely from another machine in the network and the results are the same. Does anyone have any ideas what might be causing this and how I can resolve it? I'm afraid I'm a developer playing at sys-admin, so I may be missing something obvious! Many thanks Pete Update I have now reinstalled both the operating system and VMWare and I'm still getting the same issue. I wonder if it's a result of the settings I'm putting in on the config.pl script..?

  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle Database running on SUSE (potentially migrating to RedHat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of ram allocated to it. We are growing quickly however and will require more performance shortly. Given the cost of Oracle licenses we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are: Are there substantial benefits to looking at Itanium or Sparc hardware, are there any drawbacks? Is there a point where one starts to scale out better? What are the long term support options for Itanium? Given the dominance of x86 would it be safer long term to stick with x86? On average what would be the performance benefit of implementing an Oracle database on Itanium or Sparc over x86_64? Is this an issue at all or will other factors (IO/RAM) cap out first? If anyone can point me towards some solid documentation on comparisons between the platforms that provides good case analysis of when to choose which I'm more than happy to accept that as an answer. Edit:- Added Sparc as an Option as it was previously not considered however with the recent Oracle Sun aquisition seems very relevant.

  • Upgraded users to Win7. Now getting "path not found" when saving files or opening attachments

    - by Matt Penner
    We have a Server 2008 AD environment with about 5k users. We just rolled out Windows 7 SP1 (were XP) with great success. However, about once a day we get a few calls that a user opens a file from their Documents (the folder is on the server and redirected), edits it and attempts to save but Win7 reports that the path is not found either because it doesn't exist or no permissions. The only way to fix it is to delete the profile. In addition we get about the same number but different users saying that they cannot open attachments from Outlook 2010 due to no permission. We have to edit the temp Outlook storage path in the registry to fix it (or delete the profile). I think the two issues may be related. What scares us is that we rolled out 1 month ago and had no calls of this nature until about 2 weeks ago. It started off as one or two but seems to be growing. Any ideas? We're going to open a Microsoft ticket but I wanted to seenif anyone else has run into this. Thanks!

  • SharePoint crawl not indexing main site

    - by user22215
    Guys I'm having some strange search issues' going on with my main portal application. First off let me give you a little back ground on the problem web app. Our Sharepoint environment was originally set up by a consultant that did not follow best practices. She used one web app to house our companies' intranet site, ssp, and mysites. Since than I have provisioned a new ssp that I have segmented correctly I moved all of our other sites over to the new ssp with out any problems . However, I could not assign the main portal app to the new ssp since the portal app housed the ssp site collection. So I deleted the ssp site collection after that I deleted the ssp and assigned the portal app to my new ssp. Now this is where the problem starts when I attempt to crawl this application the crawl starts than stops 5 seconds later with a status of success also it reports that 1 item was successfully crawled. The funny thing is the main portal app has nearly 30000 items. I have tracked the problem down to the web app if I create a test web app than restore the content I have no problem crawling all 30000 items. Also all of my other web apps that use the same ssp have no problem completing crawls. I don't see anything in the ULS logs or server 2003's event viewer. Also I'm using a separate dedicated index server that's configured to crawl itself via host file configuration. I would like to fix this problem with out having to recreate our main portal site due to the fact that we have several custom code modifications where DLL's were registered to the IIS bin folder also I don't even want to get into the Silverlight mods that were done. Any help with this problem is much appreciated Same problem as minehttp://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/MS-SharePoint/Q_23885820.html

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary Internet connection, and TCP over this thing is getting more puzzling every day. For example:
    - Downloading from kernel.org is crazy fast:

        $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2

      It increases to ~500 kB/s after a few seconds!
    - Some servers are incredibly slow, for instance www.graphic-pc.com: downloading a big file with wget starts at ~30 kB/s for a split second, then collapses to 5-10 kB/s or even worse.
    - Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately.
    Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and suddenly everything is extremely fast! The same www.graphic-pc.com now shoots along at 100-200 kB/s! What's going on here? How come it is so much better with the VPN than without? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between?
    Notes:
    - The setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not the cell environment or Internet congestion (although kernel.org without the VPN sometimes does worse in the evening, 60 kB/s or so, but still 500 kB/s with the VPN!).
    - For the slow case, wireshark shows retransmitted packets, duplicate ACKs, and sometimes out-of-order segments.
    - I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference.
    Update: tried under Windows 7 (no VPN) with some interesting results:

        tcp settings:     default    tcp_optimizer
        kernel.org:       10 kB/s    20 kB/s
        graphic-pc.com:    8 kB/s    70 kB/s !

    tcp_optimizer turned on CTCP, among other things. I have to check what OS graphic-pc.com is running; my bet is that Linux's tcp_westwood and Microsoft's CTCP don't mix well here...
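
    For anyone reproducing the Linux-side experiments (a sketch; the values are illustrative, not recommendations), the knobs mentioned above are set like this:

        # Congestion control: load and select westwood, then confirm it took
        modprobe tcp_westwood
        sysctl -w net.ipv4.tcp_congestion_control=westwood
        sysctl net.ipv4.tcp_available_congestion_control
        # Receive buffer autotuning limits: min / default / max, in bytes
        sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"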

  • How to allow users to transfer files to other users on Linux

    - by Jon Bringhurst
    We have an environment of a few thousand users running applications on about 40 clusters ranging in size from 20 compute nodes to 98,000 compute nodes. Users on these systems generate massive files (sometimes 1 PB) controlled by traditional Unix permissions (ACLs usually aren't available or practical due to the specialized nature of the filesystem). We currently have a program called "give", a suid-root program that allows a user to "give" a file to another user when group permissions are insufficient. So, a user would type something like the following to give a file to another user:

        > give username-to-give-to filename-to-give ...

    The receiving user can then use a command called "take" (part of the give program) to receive the file:

        > take filename-to-receive

    The permissions of the file are then effectively transferred over to the receiving user. This program has been around for years, and we'd like to revisit things from a security and functional point of view. Our current plan of action is to remove the bit rot in our current implementation of "give" and package it up as an open source app before we redeploy it into production. Does anyone have another method they use to transfer extremely large files between users when only traditional Unix permissions are available?
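
    As a point of comparison (my sketch, not part of the site's tooling): when the sender and receiver share any common group, a setgid drop directory covers much of the same ground with plain Unix permissions, though it lacks give's per-recipient targeting and safe handoff of ownership:

        # One-time setup by an admin: setgid + sticky, group-writable drop area
        install -d -m 3770 -g projgrp /shared/drop
        # Sender: the file lands group-owned by projgrp thanks to the setgid bit
        cp hugefile /shared/drop/ && chmod g+r /shared/drop/hugefile

    The appeal of give/take over this is precisely that it works when the two users share no group at all, which is presumably why the suid-root approach was chosen.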

  • Windows 8 Internet Explorer 11 proxy automation script

    - by Stefan Bollmann
    Similar to this post, I'd like to change my proxy settings using a script. However, it fails: when I am behind the proxy, IE does not connect to the Internet. Here I try the first solution from craig:

        function FindProxyForURL(url, host) {
            if (isInNet(myIpAddress(), "myactualip", "myactualsubnetip"))
                return "PROXY proxyasshowninpicture:portihavetouseforthisproxy_see_picture";
            else
                return "DIRECT";
        }

    This script is saved as proxy.pac in c:\windows, and my configuration in LAN settings is: no automatically detected settings; yes, use automatic configuration script: file://c:/windows/proxy.pac; no proxy server. So, what am I doing wrong?
    Update: when I set up a proxy manually in my LAN configuration (IE -> Internet Options -> Connections -> LAN Settings: check "Use a proxy server for your LAN", Address: <a pingable proxy>, Port: <portnr>), everything is fine in this environment. Now I try a simpler script like:

        function FindProxyForURL(url, host) {
            return "PROXY <pingable proxy>:<portnr>; DIRECT";
        }

    With the configuration described above, I am still not able to get through the proxy.
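
    One likely culprit (my addition, not from the original post): Internet Explorer 11 dropped support for retrieving PAC files via file:// URLs, so a script that is perfectly valid can silently fail when loaded from c:\windows. The quick test is to serve the same file over HTTP and point the auto-config URL at that instead (a sketch; assumes Python 2 is available somewhere on the network):

        # From the directory containing proxy.pac:
        python -m SimpleHTTPServer 8000
        # Then set the auto-config URL to http://127.0.0.1:8000/proxy.pac

    If memory serves, Microsoft also documented a registry value (EnableLegacyAutoProxyFeatures under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings) to re-enable the legacy file-based behavior; treat that as an assumption to verify against the relevant KB article before deploying it.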
