Search Results

Search found 10530 results on 422 pages for 'remote administration'.


  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about one week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support: choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately one week: choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:

    - Policy-based automatic scheduling of rules and event handling (including email notifications) makes administration of scale-out easier and automatic.
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration, and multi-instance monitoring further improve ease of configuration and management.
    - The new cloud- and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents.
    - Trends, projections, and forecasting: graphs and event handlers inform you in advance of impending file system capacity problems.
    - Zero-configuration Query Analyzer: works "out of the box" with the MySQL 5.6 Performance Schema (supported by 5.6.14 or later).
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques.
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's.

    More information on the contents of this release is available here: What's new in MySQL Enterprise Monitor 3.0? MySQL Enterprise Edition: Demos MySQL Enterprise Monitor Frequently Asked Questions MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:

    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team

    Read the article

  • NOVANTE is hiring a production SQL Server DBA at 45 - 65 K€ per year

    NOVANTE is hiring a production SQL Server DBA for 45 - 65 K€ per year. NOVANTE is a company based in Houilles, in the Yvelines (78). As part of its growth, it is looking for a production SQL Server DBA (45 - 65 K€ per year). Desired profile: a production SQL Server DBA with at least 2 years of experience. You are conscientious and methodical. You know how to administer SQL Server from the command line when necessary. You know how to automate tasks by scripting administration batches. If you are more senior, you can also analyze performance and recommend solutions. You have no preconceptions ...

    Read the article

  • I am the Webmaster now. Where do I start? [closed]

    - by John C
    I just changed jobs and will soon be in charge of a custom-built ASP.NET CMS and website for a fairly large corporation with global offices. I have IT and developer FTE resources available to me, but I am trying to build a list of branding, project, and functionality points to review. What guides or lists can or should I use to evaluate this website before I begin adding features, creating new projects, or even redesigning and redeveloping the site? (Background: I have been a webmaster/designer/developer for small WordPress/Drupal sites for 10 years, and an unofficial webmaster (director/content manager) for a large site for 3 years, with no direct development control over SharePoint administration, IIS, or hosting, but responsibility for everything else: analytics, email, advertising, social, SEO, etc.) Thank you!

    Read the article

  • New Solaris 11 book available

    - by user12611852
    A new Solaris 11 book is now available. Congratulations to my colleague in the Oracle Public Sector Hardware sales organization, "Dr. Cloud" Harry Foxwell, and his co-authors on publishing Oracle Solaris 11 System Administration: The Complete Reference.

    Table of contents:

    1. The Basics of Solaris 11
    2. Prepare a System for Solaris
    3. Installation Options
    4. Alternative Installations for Enterprise
    5. The Solaris Graphical Desktop Environment
    6. The Service Management Facility
    7. Solaris Package Management "Image Packaging System"
    8. Solaris at the Command Line
    9. File Systems and ZFS
    10. Customize the Solaris Shells
    11. Users and Groups
    12. Solaris 11 Security
    13. Basic System Performance Tuning
    14. Solaris Virtualization
    15. Print Management
    16. DNS and DHCP
    17. Mail Services
    18. Mgmt of Trusted Extensions
    19. The Network File System
    20. The FTP Server
    21. Solaris and Samba
    22. Apache and the Web Stack

    Buy one today!

    Read the article

  • Can not Load Type - Starting the ASP.NET Web Site Admin Tool

    A beginner emailed me last night, and I had forgotten about this little ditty! When you create a NEW project and immediately try to run the Web Site Administration Tool, you will get this error. The solution is easy: BUILD FIRST! I remember being very confused the first time I got this. :)

    Read the article

  • How to fix "Could not open lock file" because "Permission denied"?

    - by user66498
    Whenever I try to install any software or run Update Manager, I get an error stating:

    Package operation failed
    The installation or removal of a software package failed

    When I run apt-get I get this result:

    conan51xd@conan51xd-Lenovo-B470:~$ sudo apt-get -f install
    [sudo] password for conan51xd:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    conan51xd@conan51xd-Lenovo-B470:~$ apt-get update
    E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
    E: Unable to lock directory /var/lib/apt/lists/
    E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
    E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
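
    Note that the second command in the transcript was run without sudo, which is exactly what produces the lock errors; a minimal sketch of the likely fix, assuming sudo rights:

        # The lock errors come from running apt-get without root
        # privileges; prefixing the command with sudo is the usual fix:
        sudo apt-get update

        # If a stale lock file is left behind by a crashed package
        # manager, removing it manually is a common (careful!) last resort:
        # sudo rm /var/lib/apt/lists/lock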

    Read the article

  • How do I remove only shopping searches?

    - by Amanda
    I have a brand new 13.10 install and I want all this shopping spyware nonsense gone. Searching for "Ubuntu shopping spyware nonsense" led me to apt-get remove unity-lens-shopping but I don't actually see a unity-lens-shopping package. How do I remove shopping searches in 13.10? Update: Is there any way to distinguish the scopes that search remote servers (Ebay, Amazon, AskUbuntu) from the ones that search my local computer? Or do I have to go through them all?
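
    A hedged sketch of what typically works on 13.10, where shopping results come from the Smart Scopes service rather than the old unity-lens-shopping package (package names below may vary by release):

        # Turn off all remote (online) search results in the Dash; this is
        # the same switch as Security & Privacy -> Search in System Settings:
        gsettings set com.canonical.Unity.Lenses remote-content-search none

        # To see which scope packages are installed (remote ones are mostly
        # named unity-scope-*), and remove specific ones if preferred:
        dpkg -l | grep unity-scope-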

    Read the article

  • Simple way to create a SQL Server Job Using T-SQL

    Sometimes we have a T-SQL process that we need to run that takes some time, or that we want to run during idle time on the server. We could create a SQL Agent job manually, but is there any simple way to create a scheduled job?
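
    As a rough illustration of the T-SQL route the article points at, the msdb job stored procedures can script a job end to end; this sketch assumes sqlcmd, a local default instance, and made-up job and procedure names:

        # A minimal sketch (assumes sqlcmd and sufficient msdb permissions);
        # job name, step command, and schedule below are illustrative only.
        sqlcmd -S localhost -E -Q "
        EXEC msdb.dbo.sp_add_job         @job_name = N'NightlyCleanup';
        EXEC msdb.dbo.sp_add_jobstep     @job_name = N'NightlyCleanup',
                                         @step_name = N'Run cleanup proc',
                                         @subsystem = N'TSQL',
                                         @command   = N'EXEC dbo.usp_Cleanup';
        EXEC msdb.dbo.sp_add_jobschedule @job_name = N'NightlyCleanup',
                                         @name = N'Nightly at 2am',
                                         @freq_type = 4, @freq_interval = 1,
                                         @active_start_time = 020000;
        EXEC msdb.dbo.sp_add_jobserver   @job_name = N'NightlyCleanup';
        "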

    Read the article

  • Shader Model 3.0 not available

    - by Romy
    I have an Acer 4745G with an ATI HD 5650 with 1 GB VRAM. I installed COD Black Ops through Wine (PlayOnLinux) successfully, but when I play the game the message "Shader Model 3.0 not available" appears. I had ATI driver 11.8 installed when I tried the game. I then tried to install the driver from the System > Administration > Additional Drivers menu, but it doesn't work: I cannot log in, and the screen shows only a text login prompt. So I uninstalled it through recovery mode. Then I downloaded ATI driver 11.11 from the official ATI/AMD website, but after restarting my computer the screen again shows only text. I'm using Ubuntu (Ultimate Edition) 11.04. How can I overcome this problem?

    Read the article

  • How to Stream Videos and Music Over the Network Using VLC

    - by Chris Hoffman
    VLC includes a fairly easy-to-use streaming feature that can stream music and videos over a local network or the Internet. You can tune into the stream using VLC or other media players. Use VLC’s web interface as a remote control to control the stream from elsewhere. Bear in mind that you may not have the bandwidth to stream high-definition videos over the Internet, though.
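
    For reference, a minimal command-line sketch of the same feature (file name, port, and mux are illustrative; VLC's GUI Stream wizard builds an equivalent chain):

        # Serve a file over HTTP as an MPEG-TS stream on port 8080:
        vlc video.mp4 --sout '#standard{access=http,mux=ts,dst=:8080}'

        # On another machine, tune in with:
        vlc http://server-address:8080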

    Read the article

  • How to cleanly add after-the-fact commits from the same feature into git tree

    - by Dennis
    I am one of two developers on a system, and I make most of the commits at this time. My current git workflow is as such: there is a master branch only (no develop/release); I make a new branch when I want to do a feature, do lots of commits, and then when I'm done, I merge that branch back into master and usually push it to remote. ...except I am usually not done. I often come back to alter one thing or another, and every time I think it is done, but it can be 3-4 commits before I am really done and move on to something else.

    Problem: my feature branch tree is merged and pushed into master and remote master, and then I realize that I am not really done with that feature; I have finishing touches I want to add, where the finishing touches may be cosmetic only or may be significant, but they still belong to that one feature I just worked on.

    What I do now: currently, when I have extra after-the-fact commits like this, I solve the problem by rolling back my merge and re-merging my feature branch into master with my new commits, so that the git tree looks clean: one clean feature branch branched out of master and merged back into it. I then push --force my changes to origin. Since my origin doesn't see much traffic at the moment, I can almost count on things being safe, or I can even talk to the other dev if I have to coordinate. But I know this is not a good way to do it in general, as it rewrites what others may have already pulled, causing potential issues. And it did happen, even with just the two of us: git had to do an extra weird merge when our trees diverged.

    Other ways to solve this, which I deem not so great: the next best way is to just make those extra commits on the master branch directly, be it by fast-forward merge or not. It doesn't make the tree look as pretty as my current approach, but then it's not rewriting history. Yet another way is to wait, maybe 24 hours, and not push things to origin; that way I can rewrite things as I see fit. The con of this approach is the time wasted waiting, when people may be waiting for a fix now. Yet another way is to make a "new" feature branch every time I realize I need to fix something extra; I may end up with things like feature-branch, feature-branch-html-fix, feature-branch-checkbox-fix, and so on, polluting the git tree somewhat.

    Is there a way to manage what I am trying to do without the drawbacks I described? I'm going for clean-looking history here, but maybe I need to drop this goal if technically it is not a possibility.
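
    One common way to keep the history clean without rewriting anything already pushed is to fold late fixes in before sharing, and to stop rewriting once the merge is public; a rough sketch, with illustrative branch names and a placeholder commit reference:

        # While the feature branch is still unpushed, record late touch-ups
        # as fixup commits and fold them in before sharing:
        git checkout feature-branch
        git commit --fixup <commit-being-amended>
        git rebase -i --autosquash master

        # Once the merge has been pushed, avoid rewriting history; a plain
        # follow-up commit (or a no-ff merge of a small fix branch) is safer:
        git checkout master
        git merge --no-ff feature-branch-fix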

    Read the article

  • Topics for covering in-depth programming knowledge

    - by black_belt
    I pursued my bachelor's degree in business administration, but my interest in Information Technology led me to acquire some knowledge of PHP programming and the MySQL database. I find programming so interesting that I haven't applied for any job since my graduation. Currently I am staying home and just trying to acquire in-depth knowledge of PHP programming. So far I have developed a couple of websites and web applications, including Inventory+ Point of Sale Software and an accounting system for small organizations. I aim to have the knowledge that a Computer Science graduate should have, and for that I want to read books, but I have no idea where to start. Could you please suggest some books and topics that I should study? Thanks a lot :)

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption.

    The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. Now, a number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and the device does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies: three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block; a metadata block is no different to its inner mechanisms than a normal data block, because there is no way to tell it that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism. Robin talks about this with regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is that on most implementations you have to activate it explicitly, whereas certain devices do it by default or by design, and you don't know about it. And I'm not perfectly sure about that last part: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. But when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy with the good dedup ratio. However, you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than one disk. Yet another reason to spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool; but as mentioned before, on those devices enabling dedup is a user-made decision, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancy, dispersing it over several disks (by mirroring or parity RAID), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
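
    To make the ditto-block point concrete, here is a small sketch of the relevant knobs (pool and dataset names are made up):

        # Keep three copies of every block in this dataset (ditto blocks);
        # with more than one disk in the pool, ZFS spreads the copies
        # across devices:
        zfs set copies=3 tank/important

        # Inspect the uberblocks, as mentioned above, to see the rootblock
        # pointers that reference the triplicated metadata:
        zdb -uu tank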

    Read the article

  • The ETL from Hell - Diagnosing Batch System Performance Issues

    Too often, the batch systems that underlie a lot of database processing just grow without conscious design. When runs start to extend beyond their allotted time, and tuning no longer solves the problem, it is often discovered that batches are run in series, with draconian error handling. It is time to impose some rational design, and Nigel is a seasoned healer of batch processes.

    Read the article

  • Palm sold to HP for 1.2 billion dollars; the manufacturer's future tablet will run WebOS

    Update of 29.04.2010 by Katleen: Palm sold to HP for 1.2 billion dollars; the manufacturer's future tablet will run WebOS. While analysts were pointing to HTC, it is ultimately HP that will buy Palm. The acquisition of the failing company will cost it a mere 1.2 billion dollars (about 900 million euros), whereas Palm was valued at 80 billion dollars in 2000. The transaction has been approved by the boards of directors of both companies and should be completed by the end of July. HP, which is preparing its entry into the tablet market, has just acquired a prime asset that will allow it to do without partners such as Micro...

    Read the article

  • Communication Between Different Technologies in a Distributed Application

    - by sjtaheri
    I had to incorporate several legacy applications and services in a network-distributed application. The existing services and applications are written using different languages and technologies, including Java, C#.NET, and C++, all running on MS Windows machines. Now I'm wondering about the communication mechanism between them. What is the simplest and most standard way? Thanks! PS: the communications include simple message sending and remote method invocations.

    Read the article

  • Companies may not be investing enough in IT to prepare for their future, according to a Google study

    IT managers and employees think their companies are not preparing enough for the future, according to a Google study. What about you? Google (more precisely, Google Enterprise) commissioned Future Foundation, a trend-watching institute, to carry out a study on "the company of the future". The study looks at IT technologies and how they are perceived in the workplace. Employees of 140 companies in sectors such as financial services, industry, advertising agencies, and public administration were surveyed in five countries (France, the United Kingdom, Germany, the United States, and Japan). It emerges that new tech...

    Read the article

  • Problem after upgrading to 13.10

    - by paul Barnett
    I am a new user to Ubuntu and not a very accomplished one. I have dual-partitioned my laptop and recently upgraded to Windows 8.1 and to Ubuntu 13.10. Now when I boot into Ubuntu I get a running line of 3's and squiggles followed by scrolling text (with a fail on the line "reload cups, upon starting avahi-daemon to make sure remote queues are populated"). If I press any key, mostly Return, it will boot to the password page. Any ideas? Thanks.

    Read the article

  • TFS Backup Plan Wizard Tool

    - by Enrique Lima
    With the release of the September 2010 TFS 2010 Power Tools came an addition to the Team Foundation Server Administration Console: the Team Foundation Backups tree item. The tool is used to create backup plans; to work with it, you run through a wizard, just like you would when configuring TFS or any of its extensions. The areas covered by the tool include:

    - Backup to a network backup path, with retention configuration.
    - Under Advanced Options, the extension to be used for the full and transactional backups.
    - The capability to include external databases, meaning the reporting databases and SharePoint databases, as part of the plan.

    There are further options, as you can see: being able to define a task scheduler account, setting alerts for notifications on execution of the plans, and finally configuring the schedule for the plan execution. All in all a very good tool and a great way to safeguard the investment you’ve made.

    Read the article

  • Windows Media Player Vulnerability, PCAnywhere Warning

    Windows Media Player Vulnerability Targeted by Drive-by-Download Attack

    Security firm Trend Micro recently released details on malware that has been targeting the MIDI Remote Code Execution Vulnerability found in Microsoft's Windows Media Player. A post on Trend Micro's Malware Blog offered further insight into the malware that has been exploiting the CVE-2012-0003 vulnerability. The malware's authors have been successful in exploiting the vulnerability by tricking unsuspecting victims into opening a specially engineered MIDI file in Windows Media Player. This Web-based drive-by-download ...

    Read the article

  • Hands-on GlassFish FREE Course covering Deployment, Class Loading, Clustering, etc.

    - by arungupta
    René van Wijk, an Oracle ACE Director and a prolific blogger at middlewaremagic.com, has shared the contents of a FREE hands-on course on GlassFish. The course provides an introduction to GlassFish internals, JVM tuning, deployment, class loading, security, resource configuration, and clustering. The self-paced hands-on instructions guide you through the process of installing, configuring, deploying, tuning, and other aspects of application development and deployment on GlassFish. The complete course material is available here. This course can also be taken as a paid instructor-led course; the attendees will get their own VM and will have plenty of time for Q&A and discussions. Register for this paid course. Oracle Education also offers a similar paid course, Oracle GlassFish Server 3.1: Administration and Deployment.

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article covers one more feature, Identity-based pooling. Then there is one more topic to cover: how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following steps describe how heterogeneous connections are created:

    1. At connection pool initialization, physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is then created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection to get a new one is very expensive. If you can use a normal homogeneous pool or one of the lightweight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling.
    See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html, "Enable identity-based connection pooling for a JDBC data source", in Oracle WebLogic Server Administration Console Help. You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:

    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance. When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC. For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.

    Read the article

  • A follow up to yesterday

    - by GrumpyOldDBA
    As I have been asked, here, to tidy up yesterday's post, is the procedure my startup procedure calls, along with the logging table deployed in the DBA database. Just to muddy the water further, I have routines for remotely calling the DBAMessages table through a remote server to send out email from a central server! Just to explain: I have previously been limited to using only one server to send email alerts for multiple servers, so I attempt to code to deal with all possible circumstances...(read more)

    Read the article

  • Alternatives to SQL-like databases

    - by user613326
    Well, I was wondering: these days computers usually have 2 GB or 4 GB of memory. I'd like to use a secure client-server model, and an SQL database is the likely candidate. On the other hand, I only have about 8000 records, which will not be frequently read or written; in total they would consume less than 16 megabytes. So it made me wonder: what would be good, secure options in a Windows environment to store the data and work with it in a multi-client, single-server model, without using SQL Server or MySQL? For such a small amount of data, would other approaches be better? I'd like to keep maintenance as simple as possible (no administrators would need to know SQL maintenance, as they don't know databases in my target environment). Maybe storing in XML files, or... something else. I just wonder how others would proceed if ease of administration is the main goal. Oh, and it should be secure too; the client-server data must be somewhat secure (maybe NTLM, file shares, HTTPS, or... etc.)

    Read the article

  • From where can I install my nVidia drivers? [closed]

    - by Arthur Wulf White
    Possible Duplicate: How do I install extra drivers? Additional Drivers tool in Ubuntu 12.10? I have read here and here that I should be able to install drivers, so I'm looking for the Additional Drivers menu to install the nVidia driver. I started looking for System -> Administration and am not finding it. I have an icon that says System Settings, but it doesn't have any option related to drivers. NOTE: I am using Ubuntu 12.10.
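
    For what it's worth, in 12.10 the old standalone Additional Drivers tool was folded into the Software Sources dialog; a hedged sketch of two common routes (the exact driver package name for a given card may differ):

        # Open Software Sources, whose "Additional Drivers" tab replaced
        # the old System -> Administration -> Additional Drivers tool:
        sudo software-properties-gtk

        # Or install the proprietary driver package directly:
        sudo apt-get install nvidia-current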

    Read the article
