Search Results

Search found 40159 results on 1607 pages for 'multiple users'.

Page 549/1607

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now reached the point where the data is just too large to be held in Access, and we also want to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to have to create an Access database with links for each current database, or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales I thought Rackspace Cloud was running on a SAN plus compute nodes (as VMware's offerings do), only to find out it doesn't, so when the host server goes down for maintenance, all cloud servers on that host go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure. Which cloud hosting solutions don't rely on a single server? I've yet to find a list organised by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as two mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with mirrored data between them; until we need multiple database servers, I'm wondering if a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.

    Read the article

  • WiFi problems on several Ubuntu installations

    - by Rickyfresh
    Okay, this is the first time I have ever had to ask a question, as usually the Ubuntu community has answered everything already. On this occasion, though, many people are asking for the answer but no good solution has appeared yet, so someone please help, or I will have to install Windows on my son's and my girlfriend's PCs, and that would be a disaster as I am trying to convince people to move from Windows. I installed 12.04 on three computers on the same day:

    - Dell Inspiron (works perfectly)
    - Toshiba Satellite
    - home-built desktop

    The Dell works perfectly, but the other two keep losing the connection to the wireless network, and even while connected they stop reaching web sites; for some reason they search Google fine but will not load a site when a link is clicked. So far people have recommended in other forums:

    - removing Network Manager and installing wicd (didn't solve it)
    - changing the MTU in the wireless settings (didn't solve it)
    - all sorts of messing about with Firefox settings (this doesn't solve it, and even if it did, it would leave most average PC users scratching their heads and wishing they had stuck with Windows)

    The problem exists on two very different machines with different wireless cards, so I doubt it's a driver or hardware issue; many other Ubuntu users are reporting the same problem across a vast array of machines and wireless cards. Can someone please give a good solution, as this is going to turn a lot of people away from Ubuntu if it cannot be sorted. I would give PC specs, but the two machines are vastly different, and the other people complaining of this problem also have very different systems, all showing the same fault.
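
    One thing I have not tried yet is turning off wireless power management, which I have seen suggested elsewhere; a minimal sketch, assuming the interface is called wlan0 (iwconfig lists the real name):

        # identify the wireless chipset and its current power-save state
        lspci -nnk | grep -A3 -i network
        iwconfig wlan0
        # commonly suggested workaround: disable power management on the interface
        sudo iwconfig wlan0 power off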

    Read the article

  • noexec option enabled in fstab is not getting applied for a limited user. Is it a bug?

    - by user170918
    The noexec option enabled in fstab is not getting applied for a limited user. Is it a bug?

    cat /etc/fstab

        # / was on /dev/sda2 during installation
        UUID=fd7e2645-3cc4-4c6c-8b1b-016711c2fd07 /              ext4 errors=remount-ro            0 1
        # /boot was on /dev/sda1 during installation
        UUID=f3e58f86-8999-4678-a5ec-0a4b621c6e37 /boot          ext4 defaults                     0 2
        # /home was on /dev/sda9 during installation
        UUID=bcbc1c4d-46a9-4b2a-bb0a-6fe1bdeaed22 /home          ext4 defaults,nodev,nosuid        0 2
        # /tmp was on /dev/sda5 during installation
        UUID=8538eecc-bd16-40fe-ad66-7d7b9287839e /tmp           ext4 defaults,noexec,nosuid,nodev 0 2
        # /var was on /dev/sda6 during installation
        UUID=292696cf-fc15-40ab-9cd8-cee9bff7e165 /var           ext4 defaults,nosuid,nodev        0 2
        # /var/log was on /dev/sda7 during installation
        UUID=fab1f85b-ae09-4ce0-b169-c01205eb8f9c /var/log       ext4 defaults,noexec,nosuid,nodev 0 2
        # /var/log/audit was on /dev/sda8 during installation
        UUID=602f5003-4ac0-49e9-99d3-b29378ce9430 /var/log/audit ext4 defaults,noexec,nosuid,nodev 0 2
        # swap was on /dev/sda3 during installation
        UUID=a538d35b-b2e9-47f2-b72d-5dbbcf0afca0 none           swap sw                           0 0
        /dev/sdb1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdc1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdd1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0

    Users with sudo rights are not able to paste executable files from /bin into the file systems that have the noexec option set, but limited users are able to paste the same files into those file systems. Why is it so?
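
    The way I am verifying enforcement is this minimal check (assuming /tmp, one of the noexec mounts above):

        findmnt -no OPTIONS /tmp      # should list noexec among the active options
        cp /bin/ls /tmp/ls_test
        /tmp/ls_test                  # expected for every user: Permission denied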

    Read the article

  • How to persuade C fanatics to work on my C++ open source project?

    - by paperjam
    I am launching an open-source project into a space where a lot of the development is still done Linux-kernel-style, i.e. in C with a low-level mindset. There are multiple benefits to C++ in our space, but I fear those used to working in C will be scared off. How can I make the case for the benefits of C++? Specifically, the following C++ attributes are very valuable:

    - the concept of objects and reference-counting pointers - I really don't want to have to malloc(sizeof(X)) or memcpy() structs
    - templates, for specialising whole bodies of code with specific performance optimizations and for avoiding duplication of code
    - template metaprogramming, related to the above
    - the syntactic sweetness available (e.g. operator overloading, to be used in very small doses)
    - the STL
    - the Boost libraries

    Many of the knee-jerk negative reactions to C++ are ill-founded. Performance does not suffer: modern compilers can flatten dozens of call-stack levels and avoid bloat through wide use of template specializations. Granted, when using metaprogramming and building multiple specializations of a large call tree, compile time is slower, but there are ways to mitigate this. How can I sell C++?

    Read the article

  • Automatic switching between monitor configurations

    - by Michael Aquilina
    I have a laptop and an external monitor, and I was wondering if there is a simple approach to switching between multiple monitor configurations based on the detected available displays. For example: when I am at home and I plug in my external monitor, I would like it to become enabled automatically and the laptop screen to become disabled. As soon as I pull out the display cable for the external monitor, I would like the laptop screen to become enabled again automatically. I was expecting this to just "work", like it does in Windows, but it seems to be much harder than that. I am aware of the xrandr command to turn displays on and off, but I cannot seem to find a way to make it behave the way I describe above. I had also found this post about switching between multiple monitor configurations, and the results seem a bit inconclusive. However, I was hoping that with xrandr there would be a simpler solution. For me, the fact that when I pull out my external monitor the screen just goes black and I get an error message is a big issue holding me back from making the complete switch to Linux, as I move around a lot as a student. My OS of choice is currently Kubuntu 12.04, but I am willing to change to something else if it provides a better way of achieving the described setup.
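
    What I have in mind is something like this sketch, which I would hook up to a display hotplug event (the output names LVDS1 and VGA1 are guesses for my hardware; xrandr -q lists the real ones):

        #!/bin/sh
        # enable the external output when present, otherwise fall back to the panel
        EXT=VGA1
        INT=LVDS1
        if xrandr -q | grep -q "^$EXT connected"; then
            xrandr --output "$EXT" --auto --output "$INT" --off
        else
            xrandr --output "$INT" --auto --output "$EXT" --off
        fi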

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!
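
    For example, once the repository is cloned (the clone URL below is hypothetical; browse oss.oracle.com/git for the real path), the broken-out history is visible with ordinary git commands:

        # hypothetical clone URL -- check oss.oracle.com/git for the actual one
        git clone git://oss.oracle.com/git/redpatch.git
        cd redpatch
        git log --oneline    # one commit per Red Hat fix, instead of one giant patch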

    Read the article

  • How do I know if 'hg clone' is doing the work remotely?

    - by jjfine
    I've got a very simple Windows install of Mercurial on my machine. The 'central' repository is located at //mymachine/hg-repos/central. I want remote (VPN) users to be able to create clones of this repository in the hg-repos directory, because it gets daily backups. I have given these users full control of the hg-repos directory. My question is this: if I'm on a remote machine and I run the command

        hg clone //mymachine/hg-repos/central //mymachine/hg-repos/central-copy

    is the remote machine doing most of the work? I don't want the client to have to download all of the central repository and then upload it all back, because people are going to be using this from across the country. But I suspect that is what's happening here, since it works so easily.
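
    If it matters, my understanding is that a clone over a plain file share is driven entirely by the client, whereas an HTTP clone makes the server side do the work; the alternative I am considering is a lightweight server process (a sketch only; the local path and port are assumptions):

        REM on mymachine, serve the central repository over HTTP
        hg serve -R C:\hg-repos\central -p 8000
        REM remote users then clone through the server process:
        hg clone http://mymachine:8000/ central-copy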

    Read the article

  • Server 2003 Filter mobile devices via MAC

    - by msindle
    At one of my client's sites I need to keep my users' unauthorized devices off the wireless. They all know the SSID and password because many of them have laptops that need the wireless. I'm running out of IP addresses, and we have sent out numerous emails asking them to stay off, but like most users they ignore IT's email. I'm currently running Server 2003 as the GC/DC (but have 2008 servers in place) and two Netgear WNAP320s. I've seen several posts similar to what I'm looking for, but they seem to deal with Linux. My question is: how do I go about doing this without migrating (scheduled for the end of the year) to a new server, and is it possible to do this within Server 2003? Thanks, msindle

    Read the article

  • How to restrict deletion of a folder on NTFS share, but still allow modify access within folder

    - by thinkdreams
    I am setting up a set of scan folders for a scanning copier device, and would like to know the best way to protect the folders (one per department) from being moved or deleted, while still allowing the users to modify (i.e. create/add/delete) the scanned files within each folder. The structure is: share name > departmental folder > user files. The writing of the files initially is taken care of by a service account which has full control. We'd just like to ensure the users cannot accidentally delete the folder containing all the files (which has already happened). This is for a Windows 2003 server, with NTFS permissions. Suggestions would be most appreciated.
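
    In case it helps, this is the kind of ACL I have been experimenting with (icacls ships with Server 2003 SP2; the path and group name are placeholders): grant Modify on subfolders and files only, and deny Delete on the departmental folder itself:

        icacls "D:\Scans\Accounting" /grant "DOMAIN\Scan Users":(OI)(CI)(IO)M
        icacls "D:\Scans\Accounting" /deny "DOMAIN\Scan Users":(DE)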

    Read the article

  • Server 2012 R2 VPN DNS

    - by Tyron Gower
    Have an issue where onsite clients cannot resolve VPN users, but VPN users can resolve onsite machines. Example:

    USER1 uses LAPTOP1.
    USER1 connects to the VPN and gets the internal IP address 10.243.0.200.
    USER1 pings SERVER1 - it resolves to an IP and gets a reply.
    USER1 RDPs into SERVER1 (inside the VPN).
    USER1 pings LAPTOP1 from SERVER1 - it resolves to the IP address last assigned by DHCP (10.243.0.139) and the ping fails.
    USER1 pings 10.243.0.200 from SERVER1 - it gets a reply.

    Running Server 2012 R2. It is a domain controller, DNS and VPN server. The VPN is configured with basic default settings. All VPN users have static IPs set up in AD. Not sure where to go from here.
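
    My diagnosis so far has been limited to this kind of check (a sketch using the names from the example above; whether the VPN adapter registers itself depends on its DNS settings):

        REM on SERVER1: confirm DNS still holds the stale DHCP record
        nslookup LAPTOP1
        REM on LAPTOP1, while connected to the VPN: re-register its current addresses
        ipconfig /registerdns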

    Read the article

  • Automated VLAN creation with residential Wireless devices

    - by Zephyr Pellerin
    We've got a few WRT devices from Linksys here, and the need has arisen to deploy them in a relatively small environment. However, in the interest of manageability, we'd like to be able to automatically place every user in their own VLAN (ideally NOT a separate subnet). It seems obvious to me that the default firmware isn't capable of this - can OpenWRT/Tomato/DD-WRT support any sort of functionality such that new users are automatically VLANed or otherwise logically separated from other users? It seems like there's an easy iptables or PF solution here, but I've been wrong before. (If that seemed a little ambiguous, here's an example: user 1 sends a DHCP request to the server, a new VLAN (we'll call it VLAN 1) is created, and the user is placed in that VLAN. Then user 2 sends a DHCP request and is placed in VLAN 2, etc.)

    Read the article

  • Win Server 2008 R2 - Mapped shared folder hanging?

    - by M-Tech
    I have recently built a Windows Server 2008 R2 machine. It is purely a file server and is very much a basic build, with all Windows updates installed and joined to the domain. I have set up a shared folder on the C: drive and added permissions for domain users as co-owners. The client machines run XP SP3 and are also part of the domain. We have a few servers running the same setup on a few of our sites, but this one in particular crashes users' machines (explorer.exe hangs for at least a few minutes) when they attempt to access the shared folder. I have also turned off the power-save option on the network card; still no change. Any help with this is very much appreciated and I look forward to hearing from you ;)
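
    One workaround I have seen suggested for 2008 R2 file servers with XP clients, which I have not tried yet (so treat it as a guess, not a confirmed fix), is disabling TCP auto-tuning and receive-side scaling on the server:

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled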

    Read the article

  • MySQL Encryption and Key Management

    - by microchasm
    I am developing a local intranet system in PHP/MySQL to manage our client data. It seems that the best practice would be to encrypt the sensitive data on the MySQL server as it is being entered. I am not clear, though, on what would be the best way to do this while still keeping the data readily accessible. It seems like a tough question to answer: where are the keys stored? How best to protect a key? If the key is stored on each user's machine, how is it protected if the machine is compromised? If the key itself is compromised, how do you change it? If the key is to be stored in the db, how is it protected there, and how would users access it? If anyone could point me in the right direction, or give some tips, I'd be very grateful. Thanks.
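
    To make the question concrete, the pattern I am considering looks roughly like this (only a sketch; the path, table and column names are made up, and MySQL's AES_ENCRYPT uses ECB mode by default):

        # keep the key out of the database, in a root-only file read at app startup
        KEY=$(sudo cat /etc/myapp/aes.key)   # e.g. generated once with: openssl rand -hex 32
        mysql clientdb -e "INSERT INTO clients (name, ssn) VALUES ('Jane Doe', AES_ENCRYPT('123-45-6789', '$KEY'))"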

    Read the article

  • Choosing hardware for Flash Media Server

    - by minaev
    Having read the answers in this discussion, I would still like to ask the same question: what should I buy to run Flash Media Interactive Server 3.5? I just have slightly different boundary conditions. We plan to serve video to ca. 1,000 users simultaneously. It will be a live stream, so the server will receive the stream in HD (1280x720), cache it, reformat it to various other resolutions, and send it to users. The OS of choice is Linux, but if you say it should be MS-DOS, so be it... What would be a decent server for this task?

    Read the article

  • Outlook 2010 + Move IMAP PST file = Outlook data file cannot be accessed

    - by GWB
    I set up a new IMAP account in Outlook 2010. It works, but it creates the IMAP PST file in C:\Users\User\AppData\Local\Microsoft\Outlook. I want the file on my data drive in D:\Users\User\Documents\Outlook Files (the same folder where Outlook automatically creates the local Outlook PST). I followed the instructions here to move the IMAP PST. Testing the account (send/receive) works fine, but if I try to manually send an email I get error 0x8004010F, "Outlook data file cannot be accessed". I've tried repairing the PST using SCANPST (it always finds errors) and deleting and recreating the account, but I get the same error. If I move the PST file back, it works again, but this is not ideal. Note: I don't think this is a duplicate of this question, as the cause is different and the solution does not help.

    Read the article

  • What TLDs should I use for my NS records for redundancy? (DNSSEC support required)

    - by makerofthings7
    Question: as a general practice, is it a good idea to use multiple TLDs for the name servers? How should I choose which TLD would be a good candidate for hosting my NS names? More info: I am switching over 800 DNS zones to an outsourced DNS provider. I originally planned on setting the name-server names to nsX.company.com, but think it would be best to spread them over multiple TLDs such as .net, .org and .info. Since I plan on supporting DNSSEC at company.com, I think all the first-tier name servers must support it as well. Part of the inspiration for this question came from our provider, UltraDNS. In their configuration screen for our domains, they actively verify and alert us if our name servers aren't exactly:

    pdns1.ultradns.net
    pdns2.ultradns.net
    pdns3.ultradns.org
    pdns4.ultradns.org
    pdns5.ultradns.info
    pdns6.ultradns.co.uk
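
    For checking a candidate delegation I have been using dig (example.com stands in for one of our zones):

        dig +short NS example.com               # confirm the NS set spans several TLDs
        dig +short DS example.com               # parent-side DNSSEC delegation record
        dig +dnssec +short DNSKEY example.com   # zone keys, with their RRSIGs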

    Read the article

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following requirements:

    - staff members must be able to add new music and artists to the page
    - a gallery must be provided; it would also be good if each artist could have his/her own smaller gallery
    - users must be able to vote for artists
    - users must be able to take part in discussions (forums or comment sections)
    - staff members must be able to blog
    - staff members must be able to write articles

    I did a small project where I actually implemented all of these features, but I want to use an existing content management system so that future developers can, hopefully, extend the website more easily, and so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after seeing hours of tutorial videos on both systems, I still can't make up my mind whether I should: a) use Drupal 7, b) use WordPress 3, or c) create my own CMS. I can imagine that staff members would also want to create content using iPhone- or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with larger projects like this? The site will have approximately 400,000 - 500,000 visitors (not daily visitors; based on numbers from last year over a period of 4 months).

    Read the article

  • cookie not being sent when requesting JS

    - by Mala
    I host a webservice, and provide my members with a Javascript bookmarklet, which loads a JS script from my server. However, clients must be logged in in order to receive the JS script. This works for almost everybody. However, some users on setups (i.e. browser/OS) that are known to work for other people have the following problem: when they request the script via the javascript bookmarklet from my server, their cookie from my server does not get included with the request, and as such they are always "not authenticated". I'm making the request in the following way:

        var myScript = eltCreate('script');
        myScript.setAttribute('src','http://myserver.com/script');
        document.body.appendChild(myScript);

    In a fit of confused desperation, I changed the script page to simply output "My cookie has [x] elements", where [x] is count($_COOKIE). If this extremely small subset of users requests the script via the normal method, the message reads "My cookie has 0 elements". When they access the URL directly in their browser, the message reads "My cookie has 7 elements". What on earth could be going on?!

    Read the article

  • Shared Exchange Calendar View

    - by Mark A Johnson
    My department creates a shared calendar for everyone to enter their out-of-office times. This requires duplicate entry for those of us who keep all our info in our own Exchange calendars. Is there a way in Exchange to create a view that is simply a combined view of multiple users' calendars? For example, we would create a view with all the department users' calendars combined, but showing only the items marked "out of office". Ideally, the subject line would also include the user's name, but this is not 100% necessary.

    Read the article

  • Windows 7: How to stop/start a service from the command line (as services.msc does)?

    - by john
    I have developed a program in Java that relies on a local SQL Server instance to store its data. On some installations the SQL Server instance is sometimes not running. Users can fix this problem by manually starting the SQL Server instance (via services.msc). I am thinking about automating this task: the software would check whether the database server is reachable and, if not, try to (re)start it. The problem is that under the same user account the services can be stopped/started via services.msc (without any UAC prompt), but not via the (non-elevated) command line. The operating system seems to treat services.msc differently:

        c:\>sc start mssql$db1
        [SC] StartService: OpenService FAILED 5: Access is denied.

        c:\>net start mssql$db1
        System error 5 has occurred. Access is denied.

    So the question is: how can I stop/start the service from a Java program or the command line without having my users go through services.msc (preferably with on-board tools)?
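
    One approach I am evaluating is loosening the service's security descriptor so that non-elevated users may start and stop it (a sketch only: read the current descriptor first, then add an ACE such as (A;;RPWPLC;;;IU) - RP = start, WP = stop, LC = query status, IU = interactive users - to the D: portion; the full string below assumes a typical default descriptor and must be adapted to the sdshow output):

        sc sdshow mssql$db1
        sc sdset mssql$db1 "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;SU)(A;;CCLCSWRPWPLOCRRC;;;IU)"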

    Read the article

  • One share containing more shares with different permissions

    - by saber
    Hi all. Ubuntu 8.04 / Samba. I want opening the share \\my_host to show a directory containing subdirectories with different rights (e.g. a user with IP user_ip1 is allowed to write to only one directory). Example:

    \\my_host\folder
    \folder1 - user_ip1 can write to this folder
    \folder2 - user_ip2 ...
    \folder3

    My smb.conf:

        [filials]
        path = /var/filials
        comment = No comment
        ;admin users = nobody
        ;directory mask = 755
        ;read only = no
        available = yes
        browseable = yes
        writable = yes
        guest ok = yes
        public = yes
        printable = no
        share modes = yes
        ;locking = yes

        [filials\user1]
        path = /var/filials/user1
        comment = No comment
        ;admin users = nobody
        ;directory mask = 755
        ;read only = no
        available = yes
        browseable = yes
        writable = yes
        guest ok = yes
        public = yes
        printable = no
        share modes = yes
        ;locking = yes

    What do I write in [filials\user1] so that user1's directory appears inside the filials catalog?
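
    If nested shares are not possible, a per-directory share with a host restriction is the pattern I am after; a minimal sketch (the IP is a placeholder), appended and validated from the shell:

        # append a host-restricted sub-share, then check syntax and restart Samba
        sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
        [folder1]
        path = /var/filials/folder1
        writable = yes
        hosts allow = 192.168.0.101
        hosts deny = 0.0.0.0/0
        EOF
        testparm
        sudo /etc/init.d/samba restart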

    Read the article

  • Graphing per-user CPU usage on a Linux machine

    - by mart1n
    I want to graph (graphical output would be great, i.e. a .png file) the following situation: I have users A, B, and C. I limit their resources so that when all three run a CPU-intensive task at the same time, their processes use 25%, 25%, and 50% of the CPU. I know I can get real-time stats using top, but have no idea what to do with them. I've searched through the huge top man page but haven't found much on the subject of outputting data that can be graphed. Ideally, the graph would show a span of maybe 30 seconds. Any ideas how to achieve this?
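
    In case it helps frame answers, my current idea is to sample ps rather than top and feed the samples to gnuplot; a rough sketch (the user names and the 1-second interval are assumptions):

        # 30 one-second samples of summed per-user CPU%
        for i in $(seq 30); do
            ps -eo user,pcpu --no-headers | awk -v t="$i" '{cpu[$1]+=$2} END {print t, cpu["A"]+0, cpu["B"]+0, cpu["C"]+0}'
            sleep 1
        done > cpu.dat
        # render the samples as a PNG
        gnuplot -e "set terminal png; set output 'cpu.png'; plot 'cpu.dat' u 1:2 w lines t 'A', '' u 1:3 w lines t 'B', '' u 1:4 w lines t 'C'"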

    Read the article

  • MS Access 2007 end user access

    - by LtDan
    I need some good advice. I have used Access for many years, and I use SharePoint, but never the two combined. My newly created Access db needs to be shared with many users across the organization. The back end is SQL Server; the old way to distribute the database would be placing the db on a shared drive, connecting each PC's ODBC connection to the SQL db, and then letting users open the database and have at it. This has become the old way. What is the best (and simplest) way to let the end users use a front end for data entry/editing, reporting, etc.? Can I create a link through SharePoint so that users just open it from there? Your advice is greatly appreciated.

    Read the article

  • WSS 3.0 fails to hide quick launch items for which the current user does not have access

    - by Nils
    I'm running a Small Business Server 2008 with Windows SharePoint Services 3.0 (WSS 3.0). I thought WSS was supposed to hide menu items to which the currently logged-in user doesn't have access? Apparently, all users can see all links, regardless of whether they have access. This applies both to links to newly created sub-sites and to document libraries/lists. Is this expected behaviour, or is there a misconfiguration somewhere that causes the links to stay visible even for users without access? Thanks!

    Read the article
