Search Results

Search found 40159 results on 1607 pages for 'multiple users'.

  • Is there a plugin for [Path] Finder to browse zip-archives as folders?

    - by Andrei
    Hi, I am migrating from Windows to OS X and looking for a good way to browse zip, rar, etc. archives. Ideally, I need a plugin for Finder that will let me open archives as folders. Is there one, or any other suitable solution? Preferably free. Update: As I understand it, most Mac users use the Path Finder app for file management. It is an awesome program, but surprisingly it doesn't have this functionality either. I guess the problem is in the way of thinking – my Windows thinking is not applicable to the Mac. Here are some threads where other former Windows users can push Cocoatech in the right direction: http://forum.cocoatech.com/showthread.php?t=2883 http://forum.cocoatech.com/showthread.php?t=5167

  • Build a user's profile directory on creation in batch

    - by Moses
    I have a batch script that I use when I set up new Windows 7 PCs. It creates a user based on a variable, creates a folder on their desktop, then shares it:

        @echo off
        SET /p unitnumber="Enter unit number: "
        net user unit%unitnumber% password /add /expire:never
        MD "C:\Users\unit%unitnumber%\Desktop\Accounting #%unitnumber%"
        runas /user:administrator "net share Accounting#%unitnumber%="C:\Users\unit%unitnumber%\Desktop\Accounting #%unitnumber%""

    I discovered that the share that is created is overwritten when the newly created user first logs on, because Windows builds their profile directory at that time. Is there any way to initiate a build of a user's profile directory in the batch file just after creating it? The only thing that looks useful is the /homedir:pathname switch for the net user command, but I believe that option assumes the directory already exists. Other than that, web research hasn't been fruitful. I'd be willing to use whatever gets this done, as long as I can incorporate/launch it from the batch. Any suggestions?
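
    One commonly suggested workaround is to launch a throwaway process as the new user so that Windows builds the profile on the spot. runas cannot take a scripted password, but Sysinternals PsExec can, and it loads the target user's profile by default. A sketch under those assumptions (that psexec.exe is on the PATH, and that loading the profile this way triggers its creation - worth verifying on a test machine):

        REM Hypothetical addition after the "net user ... /add" line: run a no-op as
        REM the new user so Windows creates C:\Users\unit%unitnumber% before MD/net share.
        psexec -u unit%unitnumber% -p password cmd /c exit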

  • Forms Authentication across Sub-Domains on local IIS

    - by Parminder
    I asked this question at SO: http://stackoverflow.com/questions/8278015/forms-nauthentication-across-sub-domains-on-local-iis Now I am asking it here. I know a cookie can be shared across multiple subdomains using the setting <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation" timeout="120" path="/" domain=".mydomain.com"/> in Web.config. But how do I replicate the same thing on a local machine? I am using Windows 7 and IIS 7 on my laptop, so I have local sites such as localhost.users/ for my actual site users.mysite.com, localhost.host/ for host.mysite.com, and so on.
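
    For what it's worth, the cookie domain has to be a common suffix of the hostnames, and names like localhost.users and localhost.host share no usable parent. A sketch of one way to set this up locally - the .local names are made-up examples: point subdomains of a fake parent domain at the loopback address in the hosts file, bind the IIS sites to those names, and set the forms cookie to the shared suffix.

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    users.mysite.local
        127.0.0.1    host.mysite.local

        <!-- Web.config on both sites -->
        <forms name=".ASPXAUTH" loginUrl="Login/" protection="Validation"
               timeout="120" path="/" domain=".mysite.local" />

    Both sites also need matching <machineKey> elements so that each can decrypt the other's forms ticket.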

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?" - which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0).

    However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0, and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only, to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on meta-data like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tag - an example search might be "version_1 printing" - which straight away gives a list of articles with all these tags. So that's what I'm looking for - a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with drupal, but I was hoping for something that worked out of the box.

  • Setup SVN repository subfolder specific write permission

    - by Hai Lang
    I need to set up an SVN repository in which the devgroup has full read and write privileges, except for two subfolders, /1 and /2. For /1 and /2, four users should have write permission and all other users should have read-only access. I put the following into the authz configuration file, but people in devgroup still have write permission in /1 and /2. Any help would be highly appreciated.

        [project:/]
        @devgroup = rw

        [project:/1]
        @devgroup = r
        user1 = rw
        user2 = rw
        user3 = rw
        user4 = rw

        [project:/2]
        @devgroup = r
        user1 = rw
        user2 = rw
        user3 = rw
        user4 = rw

  • Apache - Same username in several .htpasswd files

    - by greydet
    In a virtual host, I set up two different <Location> blocks, each restricted by basic authentication against its own htpasswd file. One htpasswd file contains different usernames plus a common user name; the other htpasswd file contains only the common user name. My problem is that once users connect to a location with the common user name, they have immediate access to the other location without being asked for a different user name. Is there a way to restrict each username to the corresponding htpasswd file? Is there a way for users to ask to be re-prompted for another username/password?
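
    For reference, browsers cache Basic credentials per authentication realm, so giving each <Location> its own AuthName is the usual way to force a fresh prompt. A sketch with assumed paths and realm names:

        <Location /area-a>
            AuthType Basic
            AuthName "Area A"
            AuthUserFile /etc/apache2/htpasswd-a
            Require valid-user
        </Location>

        <Location /area-b>
            AuthType Basic
            AuthName "Area B"
            AuthUserFile /etc/apache2/htpasswd-b
            Require valid-user
        </Location>

    With distinct realms, most browsers re-prompt rather than silently replaying the common user's credentials.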

  • How to configure auto-logon in Active Directory

    - by Jonas Stensved
    I need to improve our account management (using Active Directory) for a customer support site with 50+ computers. The default "AD" way is to give each user their own account, which adds up to a lot of administration adding/disabling/enabling user accounts. To avoid this, supervisors have started to use shared "general" accounts like domain\callcenter2, and I don't like the idea of everyone knowing and sharing accounts and passwords. Our ideal solution would be to create a group of computers that requires no login by the user, i.e. the users just have to start the computer. Should I configure auto-logon with a single user account like domain\agentAccount? Is there anything else to consider if I use the same account for all users? How do I configure the actual auto-logon with a GPO on the group? Is there a "Microsoft way" without 3rd-party plugins? Or is there a better solution?
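
    For reference, Windows auto-logon is driven by a handful of Winlogon registry values, which a GPO startup script (or Group Policy Preferences registry items, scoped to the computer group) can push out without 3rd-party tools. A sketch - the account name is the one assumed above, and note that the password ends up in the registry in plain text:

        REM Winlogon auto-logon values; deploy via a GPO startup script or GPP.
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultDomainName /t REG_SZ /d DOMAIN /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d agentAccount /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d password /f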

  • How do I publish a PowerPoint presentation on the Web in high quality and without lag?

    - by Luke Hutton
    I have a ~22MB PowerPoint presentation (2007) that I need to present on a website for viewing. The file contains audio over several slides and some embedded images. What are the best practices, or the best way, to deliver the presentation to users quickly and at the best quality? Some ideas I've thought of:

    - Somehow compress the file (.wav audio files, images) into a smaller presentation and save it as a PowerPoint Show (pps), so users can download it and use the free PowerPoint Viewer?
    - Convert it to a video format (.avi) or something and stream it off the web? (Hopefully with freeware.)
    - Save it as a web page? (But then it's only viewable in IE, I believe.)

  • IIS 7 throws 401 responses on application whose physical directory has been shared

    - by tonyellard
    I have an IIS 7, Windows Server 2008 R2 box with a relatively fresh install. I've deployed a .NET 2.0 application using Windows authentication to the server and, from the default website, added it as an application. I updated the IIS authentication settings to enable Windows Authentication. When I shared out the physical directory for the application so that a developer could deploy updates, users began to receive 401 errors. I can reliably recreate the issue by sharing out the directory of any newly created application. The IIS user has the necessary read/write access to the directory. What do I need to do to keep web users from receiving 401s, while still allowing this developer to access the physical directory for deployments? Thanks in advance!
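
    One thing worth checking: the sharing wizard can rewrite the folder's NTFS ACLs when it creates the share, silently dropping entries the web users rely on. Re-granting read access is a quick test - the path is a made-up example, and since the site uses Windows Authentication the group that actually needs access may be Users or Authenticated Users rather than the IIS_IUSRS built-in:

        icacls "D:\inetpub\myapp" /grant "IIS_IUSRS:(OI)(CI)RX" /t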

  • Ubuntu 12.04 desktop, VNC viewer not refreshing screen

    - by user73279
    I've had this issue across multiple machines and multiple versions of Ubuntu desktop (all 10.04 or later). Usually it happens with an old laptop I've put Ubuntu on, but now it's happening on my primary dev machine (a quad-core PC recently upgraded to Ubuntu 12.04 desktop). The problem is this: I can connect to the machine and log in with the password, and the initial screen looks fine but never refreshes. I can see the monitor for the machine across the room and can see the mouse move and the menus pop up, but the image of the screen on the PC in front of me running the VNC viewer never updates. So the mouse and keyboard commands are working. Setup: Ubuntu 12.04 Desktop; UltraVNC Viewer (also seen with RealVNC's free VNC viewer); Desktop Sharing; static IP on eth0, dynamic IP on eth1. I think it is an Ubuntu config issue, because this PC used to work just fine with 9.04, 10.04, and 11.10 (over the past couple of years). I've also had a couple of laptops that used to have this issue with older Ubuntus but don't with 12.04. Additional info: the Win7 PC I'm trying to use to control the Ubuntu PC is connected via two D-Link 8-port gigabit routers. The Ubuntu laptop I usually control via VNC is typically only connected to the network via wireless; its screen refresh is choppy but usable. I've repeated the issue on a Win7 laptop connected via both ethernet and wireless.

  • Files not accessible

    - by gokul
    My system is running on a PC whose C: drive is out of space, so I tried to delete some files and clean up to get more space. I found that %Temp% (C:\Users\Username\AppData\Local\Temp) takes up a lot of space and tried to delete the files in it. But when I open it, it alerts me with the message "C:\Users\Username\AppData\Local\Temp is not accessible. The file or directory is corrupted and unreadable." What should I do? Is deleting files from Temp harmful to the computer?

  • Preventing SSH RSA host key warnings for change of key vs IP address

    - by Adam M-W
    I have a network with DHCP enabled, and a computer that dual-boots operating systems and has a different SSH host key on each (and yes, I would like to keep different keys on each rather than copying the same identity/private key to both). Because the MAC address is the same, the IP address does not change between operating systems, so when connecting via ssh - even when using the hostname via DNS/mDNS rather than the IP address - I get the warning:

        Warning: the RSA host key for 'hostname' differs from the key for the IP address '192.168.1.172'
        Offending key for IP in /Users/user/.ssh/known_hosts:37
        Matching host key in /Users/user/.ssh/known_hosts:38
        Are you sure you want to continue connecting (yes/no)?

    How can I suppress the warning when the hostname's key differs from the IP address's key, but retain the ability to check that the host key is the same for each hostname? (Each OS has a unique hostname.)
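
    OpenSSH's client config has a setting for exactly this cross-check. A sketch for ~/.ssh/config - the two host names are placeholders for the OS-specific hostnames:

        # Don't cross-check the host key against the known_hosts entry for the IP;
        # per-hostname host key verification still applies.
        Host hostname-linux hostname-osx
            CheckHostIP no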

  • Exalogic Elastic Cloud Software (EECS) version 2.0.1 available

    - by JuergenKress
    We are pleased to announce that as of today (May 14, 2012) the Exalogic Elastic Cloud Software (EECS) version 2.0.1 has been made generally available. This release is the culmination of over two and a half years of engineering effort from an extended team spanning 18 product development organizations on three continents, and is the most powerful, sophisticated and comprehensive Exalogic Elastic Cloud Software release to date. With this new EECS release, Exalogic customers now have an ideal platform for not only high-performance and mission-critical applications, but for standardization and consolidation of virtually all Oracle Fusion Middleware, Fusion Applications, Application Unlimited and Oracle GBU Applications. The EECS 2.0.1 release brings important new technologies to the Exalogic platform:

    - Hosting of multiple concurrent tenants, business applications and middleware deployments with fine-grained resource management, enterprise-grade security, unmatched manageability and extreme performance in a fully virtualized environment.
    - Support for extremely high-performance x86 server virtualization via a highly optimized version of Oracle VM 3.x.
    - A rich, fully integrated Infrastructure-as-a-Service management system called Exalogic Control, which provides graphical, command line and Java interfaces that allow Cloud Users, or external systems, to create and manage users, virtual servers, virtual storage and virtual network resources.

    Webcast series: Rethink Your Business Application Deployment Strategy - "Redefining the CRM and E-Commerce Experience with Oracle Exalogic" (7-Jun @ 10am PT) and, on demand, "The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure" (featuring Gartner analysts). For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: ExaLogic Elastic Cloud,ExaLogic,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress,ExaLogic 2.0.1

  • Designing business objects, and gui actions

    - by fozz
    We are developing a product ordering system using Java SE 6. The previous implementations used combo boxes, text fields, and check boxes, performing validation on action events from the GUI. The validation includes limiting existing combo box items, or even their availability. The issue in the old system was that an action was received and all rules were applied to the entire business object. This resulted in a huge chain of events as options were changed multiple times; to be honest, I have no idea how an infinite loop wasn't produced. Through the next iteration I stepped in and attempted to limit the chaos by controlling the order in which the selections could be made, making configuration of BOs a top-down approach. I implemented custom box models, action events, beans/binding, and an MVC pattern. However, I am still unable to fully isolate action event chains. I'm thinking that I've approached the whole concept backwards in an attempt to stay closest to what was already in place. So the question becomes: what do I design instead? I'm currently considering an implementation of interfaces, beans, and property change listeners to manage the back and forth. Other thoughts were validation exceptions, dynamic proxies... I'm sure there are a ton of different ways. To say that one way is right is crazy, and I'm sure it will take a blending of multiple patterns. My knowledge of Swing/AWT validation is limited; previously I did back-end logic only. Another consideration was some sort of binding (JGoodies or otherwise) to directly bind GUI state to BOs.
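
    One way to keep listener cascades from ping-ponging is to make each model setter a no-op when the value is unchanged, so PropertyChangeSupport only fires on a real transition. A minimal sketch - the class and property names are made up:

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        // Hypothetical order model: setters fire only on a real transition, which
        // keeps listener chains from re-firing when a listener echoes a value back.
        public class OrderModel {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private String finish; // made-up example property

            public void addPropertyChangeListener(PropertyChangeListener l) {
                pcs.addPropertyChangeListener(l);
            }

            public void setFinish(String newFinish) {
                String old = this.finish;
                boolean same = (old == null) ? newFinish == null : old.equals(newFinish);
                if (same) {
                    return; // no-op on redundant updates: breaks the event chain
                }
                this.finish = newFinish;
                pcs.firePropertyChange("finish", old, newFinish);
            }
        }

    PropertyChangeSupport.firePropertyChange applies the same old-equals-new check internally, which is one reason routing GUI actions through bound properties (rather than raw ActionEvents) tends to quiet the cascades.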

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline - and potentially whether I will need to bring in an outside programmer, or whether this could be learned within a few weeks. The website I am working on will have an interface where users log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows, because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox, and present a cumulative score (i.e. "you got 57% correct"). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output from their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each section (10-15), with a green completion bar or a % correct, which would draw from this. I have never had to code something that stores information like this offline - so back to my question: would it be better to learn the language needed, or bring in a coder/developer for the back-end work?
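
    For what it's worth, if "offline" here means keeping results in the browser between visits, the localStorage API covers the basic case. A sketch - the 'quizResults' key and scoring shape are made-up names:

        // Persist one quiz result and read back overall progress via localStorage.
        function saveQuizResult(quizId, percentCorrect) {
            var results = JSON.parse(localStorage.getItem('quizResults') || '{}');
            results[quizId] = percentCorrect;
            localStorage.setItem('quizResults', JSON.stringify(results));
        }

        function getProgress() {
            var results = JSON.parse(localStorage.getItem('quizResults') || '{}');
            var total = 0, count = 0, id;
            for (id in results) {
                if (results.hasOwnProperty(id)) { total += results[id]; count++; }
            }
            return { completed: count, average: count ? total / count : 0 };
        }

    If results must survive across devices or be visible to an instructor, though, that points at server-side storage (a database behind the login), which is closer to the "learn the back end or hire" fork in the question.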

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered by existing questions. Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from squid to a remote webserver be limited, so that the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

  • Map the 'Domain Admins' group into the local Ubuntu 'admin' group

    - by Miquella
    I have configured an Ubuntu 10.04 box to connect to our domain (Windows 2003 R2) using Likewise Open. All the users can authenticate as expected. However, the domain administrators do not have administrative privileges on the machine. After working at this for a few hours, I've determined what I think may be a solution: if I map the 'Domain Admins' group from Active Directory into the local 'admin' group, those users should get the appropriate permissions. But I have no idea how to do that. Does this even sound like the correct approach? A similar question was asked on StackOverflow and then migrated here, but it was never answered, as it was recommended to be asked here instead. Thanks in advance!
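
    On 10.04 the local admin group matters only because of a sudoers rule, so one commonly suggested route is to grant the AD group sudo rights directly rather than remapping group membership. A sketch for /etc/sudoers (edit via visudo) - Likewise is often documented as writing spaces in group names as '^', but the exact escaping is worth verifying on your box:

        # Grant the AD 'Domain Admins' group full sudo, mirroring the local %admin rule.
        %DOMAIN\\domain^admins ALL=(ALL) ALL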

  • How much does the geographical location of DNS servers matter?

    - by GreatFire
    We have started to run our own DNS servers, located in Asia, since that's where our main audience is. However, it seems that some users in the US sometimes have difficulty accessing our website. I've noticed myself that DNS lookups of our domain from the US are relatively slow (500 ms+). Maybe the problems some users are having are due to other DNS configuration errors, but in general, how much of an issue is the geographical location of DNS servers? Should we have an additional server in the US?

  • How to handle fine grained field-based ACL permissions in a RESTful service?

    - by Jason McClellan
    I've been trying to design a RESTful API and have had most of my questions answered, but there is one aspect of permissions that I'm struggling with. Different roles may have different permissions and different representations of a resource. For example, an Admin or the user himself may see more fields in his own User representation than another, less-privileged user. This is achieved simply by changing the representation on the backend, i.e. deciding whether or not to include those fields. Additionally, some actions may be taken on a resource by some users and not by others. This is achieved by deciding whether or not to include those action items as links, e.g. edit and delete links. A user who does not have edit permissions will not have an edit link. That covers nearly all of my permission use cases, but there is one that I've not quite figured out. There are some scenarios whereby, for a given representation of an object, all fields are visible to two or more roles, but only a subset of those roles may edit certain fields. An example:

        {
          "person": {
            "id": 1,
            "name": "Bob",
            "age": 25,
            "occupation": "software developer",
            "phone": "555-555-5555",
            "description": "Could use some sunlight.."
          }
        }

    Given 3 users - an Admin, a regular User, and Bob himself (also a regular User) - I need to be able to convey to the front end that Admins may edit all fields, Bob himself may edit all fields, but a regular User, while they can view all fields, can only edit the description field. I certainly don't want the client to have to make the determination (or even, for that matter, to have any notion of the roles involved), but I do need a way for the backend to convey to the client which fields are editable. I can't simply use a combination of representation (the fields returned for viewing) and links (whether or not an edit link is available) in this scenario, since it's more finely grained. Has anyone solved this elegantly without adding the logic directly to the client?
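
    One hedged option, in the spirit of the link approach: have the representation itself carry per-field edit metadata, computed server-side from the role, so the client stays role-agnostic. A sketch - the _editable name is made up, not from any standard:

        {
          "person": {
            "id": 1,
            "name": "Bob",
            "age": 25,
            "occupation": "software developer",
            "phone": "555-555-5555",
            "description": "Could use some sunlight.."
          },
          "_editable": ["description"]
        }

    A regular User gets only ["description"], while Admins and Bob get the full field list; the client simply renders inputs for whatever appears there, the same way it renders an edit link only when one is present.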

  • mRemote and RoyalTS both are not able to RDP connect me to a Windows Server 2008 system?

    - by djangofan
    I am completely stuck on this one. If I start an RDP session independently of these two programs, everything works fine: my RDP session connects, I click "OK" to accept the "Notice To Users" security message, and then it shows me the login screen where I enter my password manually. Now, if I try to use either mRemote or RoyalTS to create this connection, I get the same behavior except that I get a "The user name or password is incorrect." message. I know this cannot be true, since I can connect manually with RDP. So what is the problem with these two pieces of software that prevents me from logging in? I have no problems connecting to Windows XP systems with these programs. Additionally, I wish I knew how to get one of these programs to automatically click the "OK" button on the "Notice To Users" message while automatically logging me in as part of the login process. Can they do that?

  • How can I transfer a logged in user's login data from one server to another?

    - by Martin
    I have one server "A" where users can log in. Login is verified by an LDAP server "L". I have a different server "B" where users can log in, too; login is verified by the same LDAP server as before. Both servers are standard web servers with PHP. My goal: if a user is logged in to server "A" and clicks a link to log in to server "B", the user should automatically be logged in without re-entering username and password. What is a good and secure way to achieve this? I can't submit the username and crypted password to server "B". I can't use the PHP session of server "A", because it does not exist on "B". Cookies won't work either. I think there is a way, but I just can't see it. Any help is very much appreciated.
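
    One common pattern here is a short-lived signed token: "A" signs the username and an expiry with a secret that both servers share, redirects to "B", and "B" verifies the signature and starts its own session - no password ever crosses. A minimal sketch under those assumptions (the hostnames, file names and 60-second window are made up; production use would also want a one-time nonce and HTTPS):

        <?php
        // --- On server A: build a short-lived, HMAC-signed token and hand off to B.
        $secret  = 'shared-secret-known-to-A-and-B';            // assumed, distributed out of band
        $payload = $_SESSION['username'] . '|' . (time() + 60); // 60-second validity
        $token   = $payload . '|' . hash_hmac('sha256', $payload, $secret);
        header('Location: https://server-b.example.com/sso.php?token=' . urlencode($token));

        // --- On server B (sso.php): verify signature and expiry, then log the user in.
        list($user, $expires, $sig) = explode('|', $_GET['token']);
        $expected = hash_hmac('sha256', $user . '|' . $expires, $secret);
        if ($expected === $sig && time() < (int)$expires) { // prefer a constant-time compare where available
            session_start();
            $_SESSION['username'] = $user;                  // B now has its own session
        }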

  • icacls, Network Service, and setting ACLs on Windows Server 2008

    - by Ted
    Setting ACLs on Windows Server 2008 via the command line is giving me some problems. As per http://web2.minasi.com/forum/topic.asp?TOPIC%5FID=26907 I've tried all sorts of variations:

        C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin\*" /grant 'NT Authority\NETWORK SERVICE: (OI) (CI)M'

        C:\Windows\system32>icacls "D:\Websites\site.com\Web\bin\*" /grant "NETWORK SERVICE": (OI) (CI)M

    And all variations in between. However, each try leads to e.g. "Invalid parameter "'NETWORK'"", depending on the variation above. As per http://technet.microsoft.com/en-us/library/cc753525%28WS.10%29.aspx (see the comments), it appears that others have experienced the same issue, where the same command works on Windows 7/Vista/etc. but not on Windows Server 2008. What's the best way to apply permissions for the Network Service account on a directory and/or files via the command line in Windows Server 2008? Especially as there's no way to set multiple file permissions at once via the GUI (see http://serverfault.com/questions/30991/windows-server-2008-change-security-settings-for-multiple-files-at-once).
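
    For what it's worth, cmd.exe does not treat single quotes as quoting characters, and icacls rejects whitespace inside the permission spec, so a form that commonly works is the whole principal:permission string in double quotes with no internal spaces. A hedged sketch against the path above:

        icacls "D:\Websites\site.com\Web\bin" /grant "NT AUTHORITY\NETWORK SERVICE:(OI)(CI)M" /t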

  • VisualSVN Server won't work with AD, will with local accounts

    - by frustrato
    I decided recently to switch VisualSVN Server from local users to AD users, so we could easily add other employees. I added myself, gave Read/Write privileges across the whole repo, and then tried to log in. Whether I'm using TortoiseSVN or the web client, I get a 403 Forbidden error: "You don't have permission to access /svn/main/ on this server." I Googled a bit, but only found mention of phantom groups in the authz file, and I don't have any of those. Any ideas? It works just fine with local accounts. EDIT: I don't know why I didn't try this earlier, but adding the domain before the username makes it work, i.e. MAIN/Bob. This normally only works when there are conflicting usernames - one local, one in AD - but for whatever reason it works here too. Kinda silly, but I can live with it.

  • Notification if SyncToy fails

    - by Joel Coehoorn
    I support a number of laptop users. In the past (before there were many laptops), each user's computer was set up so that their My Documents folder was mapped to a shared folder on the server. This worked very well for desktops, but has several obvious downsides for laptops (no files when you're off-site, etc.). I'm exploring several alternatives for laptops to better map the shared drives, and SyncToy seems the best so far. I have a couple of trial users set up so that it syncs automatically whenever they log in, along with a desktop icon they can click if they know they'll need something saved before the next login. My problem is that I'm concerned about how I, as the maintainer of this system, can spot failures. I don't want my first indication of a problem to come after a user drops their laptop in a lake and it turns out nothing was synced for the last year. Any ideas?
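
    Since the login sync presumably already runs SyncToy from a script, one low-tech option is to wrap SyncToyCmd in a batch file that writes an event-log entry on failure, which centralized event-log monitoring can then pick up. A sketch - the install path, pair name, and the assumption that SyncToyCmd returns a nonzero exit code on failure are all worth verifying:

        @echo off
        "C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R "MyDocsPair"
        if errorlevel 1 (
            eventcreate /T ERROR /ID 901 /L APPLICATION /SO SyncToy /D "SyncToy run failed for %USERNAME% on %COMPUTERNAME%"
        )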

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this runs counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way (I just wanted to say that to keep it from distracting people from the actual question). Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quick. So my question is: how should I approach this from here? I'm not tied to any one language, though I'm partial to Python/JavaScript. Thoughts so far: it could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic - but the idea just feels wrong to me. Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time.
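
    The if-else sprawl usually dissolves once each task carries its own "should I run now?" predicate and the daemon is just a dumb loop over task objects. A minimal sketch in Python - the task names, intervals, and 30-second tick are made up:

        import time
        from datetime import datetime

        # Each task owns its schedule predicate; the daemon loop stays a dumb dispatcher.
        class Task:
            def __init__(self, name, should_run, action, depends_on=None):
                self.name = name
                self.should_run = should_run   # callable: (task, now) -> bool
                self.action = action
                self.depends_on = depends_on or []
                self.last_run = None

        def every(minutes):
            def pred(task, now):
                return task.last_run is None or (now - task.last_run).total_seconds() >= minutes * 60
            return pred

        def between(start_hour, end_hour):
            def pred(task, now):
                return start_hour <= now.hour < end_hour
            return pred

        tasks = [
            Task("poll-habits", every(5), lambda: print("checking in")),
            # A real version would also rate-limit inside the window.
            Task("afternoon-review", between(14, 15), lambda: print("review")),
        ]

        while True:
            now = datetime.now()
            for task in tasks:
                # Dependencies: only run once every prerequisite has run at least once.
                deps_done = all(d.last_run is not None for d in task.depends_on)
                if deps_done and task.should_run(task, now):
                    task.action()
                    task.last_run = now
            time.sleep(30)   # half-minute resolution keeps the loop cheap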
