Search Results

Search found 21920 results on 877 pages for 'custom properties'.

  • Ubuntu root privs installation issue

    - by Pam
    I am a fairly new Ubuntu user (and Linux user, for that matter) and I just downloaded a program whose installer was a .sh file. Not thinking, I copied the installer to an /opt subdirectory, thinking that I was going to install the application there:

        sudo cp ~/Downloads/fooInstaller.sh /opt/someDir

    I can't remember, but I either had to use sudo because /opt required it, or I just used it without thinking, but in any case, I prefixed with sudo. Once in /opt/someDir, I executed the installer again, using sudo:

        sudo sh fooInstaller.sh

    The terminal went crazy, and a few seconds later a graphical install wizard popped up that guided me through the rest of the process. At the end of the wizard I was prompted to launch the program, and I did, and everything was great. Until... I closed the program and attempted to add it to my Ubuntu "panel" (the icon panel at the top of the screen). The program was installed to /usr/local/foo/theProgram, and so I specified that path as the command in the custom app launcher. When I open the program through the panel/launcher, the program doesn't load or operate correctly, and I get a lot of error messages complaining about being denied permissions. I'm assuming that this is a "superuser/installation/privs" issue, and not a problem with the application (hence this post at superuser.com instead of the application's forums), because when I launch the program from the terminal with sudo, it opens and executes perfectly fine, just like it did the first time around after the install wizard finished. I realize I'm probably going to have to uninstall the program completely and re-install it differently.

    Finally, my question: after uninstalling, can I avoid all these issues by just running the installer (sh fooInstaller.sh) right out of my Downloads directory, sans the sudo prefix? If not, how do I get the program to install without root privs so that I can add it to my panel/launcher and get it executing correctly? Sorry for the long post, but I didn't want to omit any details because, as I'm sure you can tell, I'm not really sure I know what I'm doing. Thanks for any help here!
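    A hedged aside (an addition, not part of the original post): when a GUI program runs fine under sudo but fails without it, a common cause is root-owned files that the sudo'd first launch created under the user's home directory. A minimal check, with ~/.foo standing in for whatever dotfile directory the program actually uses:

        # list files under your home directory that root owns
        find ~ -user root
        # reclaim anything the sudo'd first run left behind (the path is a guess)
        sudo chown -R "$USER":"$USER" ~/.foo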

  • Excel not properly recalculating values

    - by gms8994
    I have an Excel sheet with values in it (this sheet is generated by a custom Perl script, but I don't think that's where the problem lies). In it, I have a formula:

        =SUM(INDIRECT(CONCATENATE(ADDRESS(6,COLUMN()),":",ADDRESS(17,COLUMN()))))

    The purpose of this formula is to give me the SUM() of the cells in the current column, between rows 6 and 17. In Gnumeric, as soon as I open the file, this works. But in Excel (both 2003 and 2007), opening the file gives #VALUE! errors in the fields with this formula, stating that the INDIRECT call with the values $B$6:$B$17 will result in an error. Here's the kink in the issue: if I edit the field (via F2), make no changes, and hit Enter, the values update. Also, it seems, if I save the file as .xlsx (Excel 2007 format), the values update upon opening. Unfortunately, I'm not sure that creating an .xlsx is a possibility with the modules that I'm using, and many of our clients probably wouldn't be able to use it anyway. Any suggestions? Editing 200+ files every month for each client isn't going to be feasible, so if there's something I'm missing, please let me know.
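    A hedged note (an addition to the post): for column B the nested calls reduce to a plain range, which helps when checking what INDIRECT actually receives; and Ctrl+Alt+F9 forces Excel to recalculate every formula in all open workbooks, which may substitute for the F2/Enter workaround on each field:

        =SUM(INDIRECT(CONCATENATE(ADDRESS(6,COLUMN()),":",ADDRESS(17,COLUMN()))))
        with COLUMN() = 2:   ADDRESS(6,2) -> "$B$6"    ADDRESS(17,2) -> "$B$17"
        so it evaluates as:  =SUM(INDIRECT("$B$6:$B$17"))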

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-priv account to be able to make a modification to the redirected printer. I know, it's not advisable, but we're not giving them access - changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx) I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to assign the privs you configure in the printer admin tool to redirected printer queues. As a test I added a non-priv test user to the default security tab in the printer admin tool (control panel - admin tools - printer admin). I assigned it all privs (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups but the options I selected (all privs) are not set. Instead the user's special privs box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options.... the question is, why, and how can I convince it not to? Ian

  • Listing group members using ldapsearch

    - by colemanm
    Our corporate LDAP directory is housed on a Snow Leopard Server Open Directory setup. I'm trying to use the ldapsearch tool to export an .ldif file to import into another external LDAP server to authenticate with externally; basically trying to be able to use the same credentials internally and externally. I've got ldapsearch working and giving me the contents and attributes of everything in the "Users" OU, and even filtering down to only the attributes I need:

        ldapsearch -xLLL -H ldap://server.domain.net \
            -b "cn=users,dc=server,dc=domain,dc=net" objectClass \
            uid uidNumber cn userPassword > directorycontents.ldif

    That gives me a list of users and properties that I can import to my remote OpenLDAP server:

        dn: uid=username1,cn=users,dc=server,dc=domain,dc=net
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: organizationalPerson
        uidNumber: 1000
        uid: username1
        userPassword:: (hashedpassword)
        cn: username1

    However, when I try the same query on an OD "group" instead of a "container," the results are something like this:

        dn: cn=groupname,cn=groups,dc=server,dc=domain,dc=net
        objectClass: posixGroup
        objectClass: apple-group
        objectClass: extensibleObject
        objectClass: top
        gidNumber: 1032
        cn: groupname
        memberUid: username1
        memberUid: username2
        memberUid: username3

    What I really want is a list of users from the top example filtered based on their group memberships, but it looks like membership is set from the group side, rather than the user account side. There must be a way to filter this down and only export what I need, right?
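    A hedged sketch (an addition; the host, base DNs, and attribute list are taken from the question, the loop itself is an assumption about what would work here): since membership lives on the group entry as memberUid values, one approach is to read those values first and then fetch each matching user entry:

        # pull the group's memberUid values, then export each member's user entry
        for uid in $(ldapsearch -xLLL -H ldap://server.domain.net \
                -b "cn=groups,dc=server,dc=domain,dc=net" "(cn=groupname)" memberUid \
                | awk '/^memberUid:/ {print $2}'); do
            ldapsearch -xLLL -H ldap://server.domain.net \
                -b "cn=users,dc=server,dc=domain,dc=net" "(uid=$uid)" \
                objectClass uid uidNumber cn userPassword
        done > groupmembers.ldif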

  • Win7 - Some pinned programs' icons are corrupt, show default

    - by Andrew Backer
    I have both FF 3.6 & Chrome pinned to my taskbar in Win7. The icons for these two programs show up as the ugly default icon from yesteryear. Strange. Is there some way to force an icon refresh for pinned programs? When I first added them they showed properly, but several days later they reverted to this state. Un-pinning the program causes the icon to show up properly, and re-pinning it causes it to break again. These other proggies show up fine: Media Center, Media Player Classic HC, Hulu Desktop, WMP, and the folders. I have 2 user accounts on this box, and both are showing this behavior. I have tried changing the taskbar icon size to 'small' and back, but to no effect. Edit (Add): The icons show up as broken in the start menu too, but I can navigate to the EXE directly. When I click "change icon" in the properties for the start menu entry, I get the error: Can not find %ProgramFiles%\Google\Chrome...\chrome.exe.
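    A hedged suggestion (an addition; standard icon-cache folklore rather than a confirmed fix for this case): Windows 7 keeps a per-user icon cache that can go stale, and rebuilding it costs nothing to try:

        taskkill /f /im explorer.exe
        del /a "%LocalAppData%\IconCache.db"
        start explorer.exe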

  • How can I make Cygwin open a new window each time I use a Windows 7 keyboard shortcut?

    - by Michael Gundlach
    [Update: The short answer is, if an application is the 3rd thing on your taskbar, press WindowsKey+Shift+3 to open a new instance. Hooray!] I have Chrome and Cygwin on my taskbar. Chrome's shortcut is Ctrl-Alt-C (as set through right-clicking the icon and putting Ctrl-Alt-C into Chrome - Properties - Shortcut Key). Cygwin's shortcut is Ctrl-Alt-T. When I press Ctrl-Alt-C, I get a new Chrome window. Great! It's as if I had shift-clicked on the Chrome icon. When I press Ctrl-Alt-T, I get a Cygwin window the first time, but after that I just get focused on the Cygwin window. As if I had simply clicked on the Cygwin icon and not shift-clicked. As if Cygwin wasn't able to have more than one instance open. I can still shift-click the icon to get more instances. I've tried with different keys than Ctrl-Alt-T and got the same behavior. Strangely, I've twice managed to get it into a state (via just clearing and setting the shortcut key over and over) where a shortcut WOULD open multiple instances -- but it was Ctrl-Alt-G both times, which doesn't make sense to my brain, which has been trained to use Ctrl-Alt-T for years. Ctrl-Alt-G usually behaves just as poorly as Ctrl-Alt-T, except for the two times when magically it started behaving properly. So I'm thinking this is a Windows 7 bug (which has existed since Windows XP at least), but I'm hoping someone knows something I don't :)

  • How do I configure permissions for a cluster share using Powershell on 2008?

    - by Andrew J. Brehm
    I have a cluster resource of type "file share", but when I try to configure the "security" parameter I get the following error (excerpt):

        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object

    Using cluster.exe I get a better result, namely the usual nothing when the command worked. But when I check in Failover Cluster Manager, the permissions have not changed. In Server 2003 the cluster.exe method worked. Any ideas?

    Update: Entire command and error.

        PS C:\> $resource=get-clusterresource testshare
        PS C:\> $resource

        Name                State               Group               ResourceType
        ----                -----               -----               ------------
        testshare           Offline             Test                File Share

        PS C:\> $resource|set-clusterparameter security "domain\account,grant,f"
        Set-ClusterParameter : Parameter 'security' does not exist on the cluster object 'testshare'. If you are trying
        to update an existing parameter, please make sure the parameter name is specified correctly. You can check for
        the current parameters by passing the .NET object received from the appropriate Get-Cluster* cmdlet to
        "| Get-ClusterParameter". If you are trying to update a common property on the cluster object, you should set
        the property directly on the .NET object received by the appropriate Get-Cluster* cmdlet. You can check for the
        current common properties by passing the .NET object received from the appropriate Get-Cluster* cmdlet to
        "| fl *". If you are trying to create a new unknown parameter, please use -Create with this Set-ClusterParameter
        cmdlet.
        At line:1 char:31
        + $resource|set-clusterparameter <<<< security "domain\account,grant,f"
            + CategoryInfo          : NotSpecified: (:) [Set-ClusterParameter], ClusterCmdletException
            + FullyQualifiedErrorId : Set-ClusterParameter,Microsoft.FailoverClusters.PowerShell.SetClusterParameterCommand

  • DEB: "Provides:" field ignored

    - by Creshal
    I need to replace a package with a custom one, which gets its own name (foo-origpackage). To allow it to be used as a drop-in replacement, I added the Provides: origpackage line to the control file. apt-cache show foo-origpackage lists the "Provides" entry just fine. However, when I want to install a package depending on origpackage, it fails ("Package origpackage not installed"). Is there some distinction between "real" and virtual packages I'm missing?

    EDIT: To be precise, what I want to replace is xen-utils-common for Squeeze. My tao-xen-utils-common has the following control file:

        Source: tao-xen-utils-common
        Section: kernel
        Priority: optional
        Maintainer: Creshal <[email protected]>
        Build-Depends: debhelper
        Standards-Version: 3.8.0
        Homepage: http://tao.at

        Package: tao-xen-utils-common
        Architecture: all
        Depends: gawk, lsb-base, udev, xenstore-utils, tao-firewall
        Provides: xen-utils-common
        Conflicts: xen-utils-common
        Replaces: xen-utils-common
        Description: Xen administrative tools - common files (modified)
         The userspace tools to manage a system virtualized through the
         Xen virtual machine monitor. Modified for use with TAO Firewall.

    Installing xen-utils-4.0 fails, however:

        foo@bar# apt-cache showpkg tao-xen-utils-common
        Package: tao-xen-utils-common
        Versions:
        4.0.0-1tao1 (/var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages) (/var/lib/dpkg/status)
         Description Language:
                         File: /var/lib/apt/lists/repo.tao.at_dists_stable_main_binary-amd64_Packages
                          MD5: 7c2503f563fca13b33b4eb3cbcb3c129

        Reverse Depends:
          tao-firewall,tao-xen-utils-common
          tao-firewall,tao-xen-utils-common
        Dependencies:
        4.0.0-1tao1 - gawk (0 (null)) lsb-base (0 (null)) udev (0 (null)) xenstore-utils (0 (null)) tao-firewall (0 (null)) xen-utils-common (0 (null)) xen-utils-common (0 (null))
        Provides:
        4.0.0-1tao1 - xen-utils-common
        Reverse Provides:

        foo@bar# apt-get install xen-utils-4.0
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xen-utils-common
        Suggested packages:
          xen-docs-4.0
        The following packages will be REMOVED:
          tao-xen-utils-common
        The following NEW packages will be installed:
          xen-utils-4.0 xen-utils-common

    Edit:

        foo@bar# apt-cache policy xen-utils-4.0
        xen-utils-4.0:
          Installed: (none)
          Candidate: 4.0.1-4
          Version table:
             4.0.1-4 0
                500 http://ftp.at.debian.org/debian/ stable/main amd64 Packages
                500 http://security.debian.org/ stable/updates/main amd64 Packages
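    A hedged diagnostic (an addition to the post): apt/dpkg of the Squeeze era cannot satisfy a versioned dependency with a virtual package, so if xen-utils-4.0 declares something like "Depends: xen-utils-common (>= ...)", an unversioned Provides line is silently ignored. Worth checking:

        # if Depends lists xen-utils-common with a version qualifier, a virtual
        # package can never satisfy it on a Squeeze-era dpkg
        apt-cache show xen-utils-4.0 | grep -i ^depends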

  • An easily customizable linux distribution using minimal disk space?

    - by Frank
    I'm looking for a Linux distribution that can be easily used to create my own distribution that's the same system with some software installed. So basically I should be able to create an ISO which, when installed, will have the Linux distribution with my desired software installed. More specifically, I plan on installing MySQL and a bit of my own software, which shouldn't be too big. However, this distribution needs to be extremely small in terms of disk space. The distribution, including MySQL, should not exceed 100 MB. It should, of course, still be able to connect to the internet and perform other standard functions. I don't need X or any sort of window manager, and would prefer not to have one since it would increase disk usage. Currently I have tried ttylinux and Tiny Core Linux. I've found that ttylinux, while extremely small, has almost nothing, so that MySQL can't even be installed. Tiny Core Linux, on the other hand, is a bit too big. I've found OpenEmbedded and Linux From Scratch, but I would prefer for the install and build process to be much easier. What other distribution would you recommend for my purposes? Minimizing disk usage is the most important, followed by ease of installing and creating the custom distribution.

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly applied in the cloud. For deployment, I use the Elastic Beanstalk CLI utility. I have run the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter one, among the other options, contains the WSGI configuration that has to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are considered when I start the environment or update it. Thus, when I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, I'm not able to automatically change the respective values of wsgi.conf; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, neither of the options I specify in it affects the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGI path and the path to my static data.

  • MySQL Workbench is not finding MySQL service? [closed]

    - by PhADDinTraining
    I set up a local MySQL server, currently with no databases. I'm trying to create a new server instance profile in MySQL Workbench to manage the server, and during the Create New Server Instance Profile wizard, it gets to the Windows Management section and tells me that No MySQL service found. I went into Task Manager and found the process mysqld.exe to be running, under the user name of NETWORK SERVICE. Then I went into the Services tab and found that MySQLServerName (I custom named the Windows service) is also there, and status is running. I ran cports and looked at what ports mysqld.exe is using, and ran a telnet command on that port. It's reporting that the port is being listened in on. I then ran the MySQL Command Line Client to be sure, and after \r it gives me a proper connection ID and a list of databases (NONE at this point). But with all this, I can't make the wizard find a running service. I've Googled this and found no answers, so please, if someone would help shed some light on this issue that'd be great!

  • Exchange 2010: Find Move Request Log after move request completes

    - by gravyface
    EDIT: significantly changed my question here to streamline it a bit. I've gone ahead and used 100 as my corrupted item count and run it from the Exchange Shell.

    So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have moved mailbox store from OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. What I'd like to do now is review the previous move request log files: when they were in progress, I could right-click - Properties - Log - View Log File, but now that they're completed, that's not available. Nor can I use:

        Get-MoveRequestStatistics <user> -includereport | fl MoveReport

    ...as the move request has now completed and it errors out with "couldn't find a move request that corresponds...". Basically what I'd like to do is present the list of bad items to the user so that they're aware of what items didn't come across and, if anything important was lost, be able to check their current OST, an archive .pst, etc. to recover it if possible. If this all needs to be wrapped up in a batch Exchange PowerShell command that pipes the output to log files on disk somewhere, I'm all ears, and would appreciate it for the next migration we do.
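    A hedged sketch (an addition; it assumes the completed move requests have not yet been cleared with Remove-MoveRequest, since their statistics disappear once they are, and the log path and Alias-based file name are placeholders): the report can be captured for every completed move in one pass, e.g.:

        Get-MoveRequest -MoveStatus Completed | ForEach-Object {
            # one log file per mailbox; adjust the folder to taste
            Get-MoveRequestStatistics $_ -IncludeReport |
                fl DisplayName,BadItemsEncountered,MoveReport |
                Out-File "C:\MoveLogs\$($_.Alias).txt"
        }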

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings. So far I have it set where each document has a custom field called "Page" with a checkbox list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR", it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for the maintenance of the documents to be able to upload new documents to the documents list and note on which pages they should appear, without having to manually update the pages themselves. The problem that I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual checkboxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via CQWP or by different means?

    Goals/Requirements of Installation:
    - Allow intranet documents to be maintained by nontechnical personnel
    - Display documents on the appropriate pages without the user having to edit the actual page or web part
    - Denote document page location using user-settable document attributes (if possible)
    - Maintain current intranet organization and workflow
    - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

  • Cannot find "IIS APPPOOL\{application pool name}" user account in Windows Server 2008

    - by MacGyver
    Normally when setting up IIS 7, I'm used to allowing permissions to the user IIS APPPOOL\{application pool name} on the root folder of my web application(s). I also give permissions to IUSR (or the IIS_IUSRS user group; note that in Windows Server 2008 I found IUSR isn't in that group by default, so I added it). In Windows Server 2008, I cannot find the user IIS APPPOOL\{application pool name} under Security under the Windows folder properties. I'm using Windows Authentication in ASP.NET. I'm receiving a 401.1 on the page in Internet Explorer 8 after getting the authentication prompt. Mozilla Firefox also gave me a Windows authentication prompt and got me into the site fine. Same with Google Chrome. How can I solve this one?

        HTTP Error 401.1 - Unauthorized
        You do not have permission to view this directory or page using the credentials that you supplied.

        Specific page information:
        Module: WindowsAuthenticationModule
        Notification: AuthenticateRequest
        Handler: PageHandlerFactory-ISAPI-4.0_32bit
        Error Code: 0x8009030e
        Requested URL: http://.....aspx
        Physical Path: C:\.........aspx
        Logon Method: Not yet determined
        Logon User: Not yet determined
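    A hedged aside (an addition; the path and pool name are placeholders): app pool identities never appear in the account picker's search results - they have to be typed in full with the Location set to the local machine, or granted from a console, e.g.:

        icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS APPPOOL\MyAppPool":(OI)(CI)RX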

  • Internet Connection not working - USB LAN connection - from particular modem

    - by Paul
    I am trying to fix the Internet connection on a friend's Dell Inspiron 1720 with XP Service Pack 3. It has an integrated network card that stopped working; after powering the modem down and up it still didn't work. I brought it back to my place to try a few things, i.e. check the cable, update the driver, etc.... it still didn't work. So I bought a USB LAN connector. It didn't work straight away, but I went to configure the properties and changed the ConnectionType from AutoSense to 100BaseT / 10BaseT Full_Duplex; I basically just tried them all. From my place, when connected to my desktop, 10BaseT and 10BaseT Full_Duplex worked. From my place, when connected to their laptop, 10BaseT and 10BaseT Full_Duplex worked. Happy, I went back to my friend's house confident it would all work, and it didn't. Brought it back to mine and it did. While there, in Network Connections the connection is there: recognized, enabled, 'working properly'; it just says not connected. Also there is no LED on the USB connector. While at mine, same as above, except there is an LED on the USB connector and it says connected. The other difference I can think of is they have a cable modem; I'm plugged into the back of a Belkin wireless router - would this make a difference? Any other ideas what to try? (Would getting the model of the cable modem help anyone?) The USB connector is a "DM9601 USB to Fast Ethernet".

  • Best way to 'harden' embedded ext4 file server against unexpected loss of power?

    - by Jeremy Friesner
    Hi all,

    First, a little background: my company makes an audio streaming device that is a headless, rack-mounted Linux box with a couple of SSDs attached. Each SSD is formatted with ext4. The users can connect to the system using Samba/CIFS to upload new audio files or access existing ones. There is also custom software for streaming out audio over the network. This is all fine. The only problem is that the users are audio people, not computer people, and see the system as a 'black box', not as a computer. Which means that at the end of the day, they aren't going to ssh in to the box and enter "/sbin/shutdown -h"; they are just going to cut power to the rack and leave, and expect things to still work properly the next day.

    Since ext4 has journalling, journal checksumming, etc, this mostly works. The only time it doesn't work is when someone uploads a new file via Samba and then cuts power to the system before the uploaded data has been fully flushed to the disk. In that case, they come in the next day and find that their new file has been truncated or is missing entirely, and are unhappy.

    My question is, what is the best way to avoid this problem? Is there a way to get smbd to call "sync" at the end of every upload? (Performance on uploads isn't so important, since they only happen occasionally.) Or is there a way to tell ext4 to automatically flush within a few seconds of any change to a file? (Again, performance can be sacrificed for safety here.) Should I set a particular write-ordering mode, activate barriers, etc?
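    A hedged sketch of the two knobs the question asks about (share name and mount point are placeholders): Samba can be made to honor client sync requests and flush on every write, and ext4 can be told to commit its journal more frequently with write barriers kept on:

        # smb.conf - honor client sync requests and force a flush on every write
        [audio]
            path = /mnt/ssd1
            strict sync = yes
            sync always = yes

        # /etc/fstab - barriers on, journal commit at most every 5 seconds
        /dev/sda1  /mnt/ssd1  ext4  defaults,barrier=1,data=ordered,commit=5  0  2

    Note that sync always effectively serializes every write, so upload throughput will drop; the post says that's an acceptable trade here.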

  • Cache updates when migrating DNS from one provider to another

    - by JohnCC
    This may be a Windows DNS specific question or a general DNS best practice question - I'm not sure!

    We migrated our 3rd party DNS provision from provider A to provider B. I noticed that our internal recursive Windows DNS servers still had NS records cached for our domains pointing to provider A's servers, even though I changed the nameservers with our registrar several days ago, and even though selecting the properties of the cached records showed a TTL of 1 day. After 24 hours, when the NS records in this cache have expired, will the DNS server go back to the TLD server for an update on the authority, or will it go by preference to dns1.providera.com since that is what it has cached? In this case I arranged to leave provider A's servers up for a week to allow changes to propagate, so dns1.providera.com is still active and would still provide NS and SOA records that said that dns1.providera.com. was in charge of this domain. Given this fact, would the Windows DNS server ever go back to the TLD and pick up the authority changes, or would it just assume all was well and renew timestamps on its cached NS records?

    I wonder what would be the best approach to ensuring that caches pick this up. Should I:

    (1) Leave provider A's servers in place and active and wait for caches to catch up... basically what we're doing now, which seems to have issues - perhaps specifically for Windows servers, or perhaps more widely.

    (2) Leave provider A's servers in place but change the NS and/or SOA information they provide to tell caches that new servers are in charge.

    (3) Remove provider A's servers after 2*TTL to force remaining caches to update.

    The issue with (2) is that on provider A's system I can't seem to change the NS or SOA information to anything other than their servers. The issue with (3) is that I'm not sure how a DNS server would behave in this case. When it couldn't reach the cached name servers, would it flush its cache and try a full recursive lookup, or would it just return an error, forcing the user to clear the cache manually?

    Thanks in advance!
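    A hedged addition (not in the original question; the remote server name is a placeholder): on the internal Windows DNS servers you control, the wait can be skipped entirely by flushing the server-side cache once the registrar change is confirmed:

        rem on each recursive Windows DNS server, or remotely by name
        dnscmd /clearcache
        dnscmd \\OTHERDNS01 /clearcache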

  • What would cause an IIS6 website to be unavailable remotely randomly for a few minutes at a time?

    - by jskunkle
    The website is served by IIS6 on Windows Server 2003. We never saw this problem once for months in beta. We made the new site live yesterday - it's getting more traffic than in beta, but not that much - resource utilization on the server and speed are fine. Today the site has been unavailable remotely a few (4?) times for a few minutes at a time. If you visit any page on the site, nothing is ever returned and eventually the request times out. While this is happening, I can connect to the server via remote desktop and the site loads fine from the live URL when running a browser on the server locally. Other websites on the server continue to function fine the entire time (using the same instance of IIS, different app pools). Other computers on the same network can't access the website either. Other than not serving content, the server seems to behave normally - scheduled jobs in our custom job system continue to run, etc. We've looked at the IIS logs quickly and we don't see any traffic out of the ordinary - no traffic spikes, etc. Any ideas? Thanks, Shane
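    A hedged pointer (an addition to the post): when IIS6 goes silent but the per-site W3SVC logs show nothing, the kernel-mode HTTP.sys error log sometimes records what the site logs miss (connection limit hits, request queue overflows, app pool failures):

        type C:\WINDOWS\system32\LogFiles\HTTPERR\httperr1.log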

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials to each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there, navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this:

        In-house central FTP server:
        |- FolderA --> server A root folder
        |- FolderB --> server B root folder
        |- FolderC --> server C root folder

    A setup like this would mean that we can manage access credentials on the central FTP server, and team members would only need the access credentials to the central FTP server; from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) would already have been done. So I'm looking for pointers that could help us find software for Windows that could help us to simplify our current setup. Thank you!

    Similar (unanswered) question here: FTP management server

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom; however, the web app still can't start/stop the app pools. I get the following error:

        System.UnauthorizedAccessException: Filename: redirection.config
        Error: Cannot read configuration file due to insufficient permissions
           at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath)
           at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath)
           at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection()
           at Microsoft.Web.Administration.ServerManager.get_ApplicationPools()

    Discussion here suggests setting the application pool to run as Local System or an administrator. This does work, but I don't want to do it for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local administrators group, but initially this didn't work, and giving the user read/write/modify permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how can I apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions". Does anyone know how to do this, if indeed these are the permissions I require?

  • Is it possible to stop the "Beep" device on all computers in a Windows network?

    - by sharptooth
    On Windows the default behavior is to make an annoying "beep" sound every time Windows thinks something notable happened. The result is that when someone sends a company-wide email via MS Exchange, all computers around my cubicle beep one by one. This is annoying and makes no sense. Luckily beeping can be shut off. Someone has to:

    - open the "Device Manager",
    - select "View - Show hidden devices",
    - find the "Beep" device in the "Non-Plug and Play Devices" node,
    - open its properties and go to the "Driver" tab,
    - set "startup type" to "Disabled" and click "Stop".

    The "Beep" device will stop and no longer produce the useless sound. This solution however requires tracking down every computer and then talking to its user, which is not very convenient. Device Manager doesn't allow stopping a device on another computer. I'm looking for a solution that can be deployed by the administrators team. We have a domain, and the administrators even install programs company-wide automatically. Are there any means to stop the "Beep" device on all computers in the Windows network with some remote-administration features automatically?
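    A hedged sketch (an addition; the hostname is a placeholder): the Beep device is an ordinary kernel service underneath, so the same change can be scripted with sc.exe and pushed as a GPO startup script or run remotely per machine:

        sc \\SOMEHOST stop beep
        sc \\SOMEHOST config beep start= disabled

        rem or, in a domain-wide startup script run locally on each machine:
        sc stop beep
        sc config beep start= disabled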

  • Dual boot windows 8 pro and windows 7 on XPS 8500 Special Edition

    - by Jesse
    I am trying to set up a dual boot with Windows 7 Premium and Windows 8 Pro on an XPS 8500 Special Edition. I created a new primary partition on my C: drive, inserted the Windows 8 install disk, and rebooted my computer from DVD. I select custom install and the dialog box saying "Where do you want to install windows at?" pops up, but none of my drives are listed. Please help me determine what is going on. I don't understand why none of my drives are showing up on this menu. Not even the original drive. When I go to load driver and click on the partition I created, it tells me "No signed device drivers were found. Make sure the installation media contains the correct drivers, and then click OK."

    Edit: Resolved the above issue by running setup from the source folder on the install disk instead of booting from DVD, and was able to locate my new partition and start the install. It completes the first step, "Copying windows files", just fine, but then on the next step, "Getting files ready for installation", my computer restarts and attempts to load Windows 8 but keeps telling me my PC needs to restart. This keeps going on in an infinite boot loop. Please help, this has been a nightmare!

  • Loop through several servers, find specific dlls , get the dll version, internal filename and path?

    - by Graham
    I am a newbie to PowerShell, and using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a CSV file that contains the wildcarded required DLLs in the GAC_MSIL or a sub-directory, the DLL version, internal filename and path, and the server IP address. The code is below, and it is in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security log-ins etc. The code has produced a set of results, but only for the last server; it probably does the first server, then overwrites it, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at doing that, but my scripting skills in PS are not yet up to it.

    Code:

        $out = "Ouput_dll_ver_results.csv";foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out

    Can you please advise if/how the above code can be corrected, and if not, what alternatives do I have to get the information I need? Many thanks in advance. Graham
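    A hedged rework (an addition; IPs and filename patterns are copied from the question, the accumulation pattern is the suggested fix): the inner assignment overwrites $VersionInfo on each pass, so the fix is to emit objects from the loop and collect them once; (Get-Item).VersionInfo is used here in place of Get-Command since it exposes the same FileVersionInfo data:

        $out = "Ouput_dll_ver_results.csv"
        $results = foreach ($server in '11.222.33.123', '11.222.33.124') {
            Get-ChildItem "\\$server\C$\windows\assembly\GAC_MSIL" -Recurse `
                -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll |
            Where-Object { $_.FullName -notmatch '\\windows\\assembly\\temp\\' } |
            ForEach-Object {
                $vi = (Get-Item $_.FullName).VersionInfo
                New-Object PSObject -Property @{
                    Server       = $server          # which box the dll came from
                    FileName     = $vi.FileName
                    InternalName = $vi.InternalName
                    FileVersion  = $vi.FileVersion
                }
            }
        }
        $results | Export-Csv $out -NoTypeInformation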

  • Network bridge does not work on Windows 7

    - by D. Strout
    I am trying to use Windows 7 to bridge my wireless and wired connections together to get wireless on a second Windows XP computer (fresh install). The network bridge is created successfully, but when I connect the cable from the first computer with the bridged connections to the second one with just a wired connection, nothing happens. The second computer doesn't connect, and the first computer shows no sign of anything different. I tried bridging the connections of a third computer, and connecting this computer to the second computer. This worked (third computer bridged the wireless to the second computer). Thus, the problem must be with the first (Win 7) computer. However, I have no idea what the problem could be. All Internet Connection Sharing is turned off. Homegroup is disabled (it was originally enabled, I thought that might be a problem, so I disabled it). Also, I had VMWare fusion installed, and that created extra items in the "This connection uses the following items" box in the properties dialog. Thinking this might be causing issues, I uninstalled that too. Still, with everything I tried, I can't get it to work. Any suggestions? Edit: Another thing I noticed that might be worth mentioning: The network icon in the Win7 taskbar has a red X on it that means it's not connected, but when I click the icon, it says connected next to my wireless connection, and I am able to access the internet.

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via NX Client on my Windows 7 laptop.

    My environment:
    - server: Ubuntu 10.10 on an AMD 1GHz/512MB RAM PC
    - client: Windows 7 on a ThinkPad SL510
    - software: the server runs NXServer 3.4.0 with the xfce4 window manager; the laptop uses NX Client for Windows

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean, I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window. The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash") while another window will contain the taskbar, and another window will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them. I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!
