Search Results

Search found 43979 results on 1760 pages for 'sql down under'.

Page 1054/1760

  • Lightweight Outlook search

    - by Simon Johnson
    Does anybody know of a plugin for Outlook 2003 that makes search fast and accurate? I tried Microsoft Search and Google Desktop Search, but I find that these products slow down my development machine too much. I had heard of Lookout, but it appears that Microsoft has pulled it.

    Read the article

  • Dedicated hard disk for Informix SE dbname.dbs files & dedicated ramdisk for /tmp files.

    - by Frank Computer
    INFORMIX-SE 7.2: I would like to dedicate a hard disk exclusively to my dbname.dbs directory, which holds all the .dat and .idx files, and create a ramdisk for my /tmp temporary files in order to improve performance. I would also like to strip the OS of any unnecessary files and processes to minimize overhead for my dedicated application. Is this a good idea, and are there any roadmaps for accomplishing this?
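    A minimal sketch of the mounts involved, assuming a spare partition at /dev/sdb1 and a database directory under /usr/informix (device, size and paths are examples, not taken from the actual setup):

        # /etc/fstab entries - illustrative only
        /dev/sdb1   /usr/informix/dbname.dbs   ext2    defaults,noatime      0 2
        tmpfs       /tmp                       tmpfs   size=256m,mode=1777   0 0

        # apply without a reboot
        mount /usr/informix/dbname.dbs
        mount /tmp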

    Read the article

  • Hiding mapped drives for all users but letting programs access them

    - by AgainstClint
    I'm not sure if this is possible: we have 16 network drives that are mapped whenever any user logs on, and I would like to cut this down to just one visible drive while leaving the others usable by certain programs. I would simply un-map them, but one of our constantly used programs writes to almost all of those drive letters, so they need to stay mapped for that program; they just don't need to be visible to the user. Is this possible?
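    One commonly suggested approach is the Explorer NoDrives policy, which hides drive letters from the shell while programs can still reach them by path; a hedged sketch (the bitmask below hides F: through J: and is only an example - the value can also be set machine-wide under HKLM):

        rem bitmask: A:=0x1, B:=0x2, C:=0x4, ... so F:+G:+H:+I:+J: = 0x3E0 (decimal 992)
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDrives /t REG_DWORD /d 992 /f
        rem log off and back on (or restart explorer.exe) for the change to take effect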

    Read the article

  • Running 1 DC physically and a second virtually

    - by stead1984
    I plan to create my first DC and forest on a physical server, then run a second DC on a virtual server that replicates the first. I understand that this provides redundancy for AD, so that if the first domain controller went down the second would take over until the first is back online. Would this work, and how?
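    For reference, a hedged sketch of promoting the virtual machine as a replica domain controller (these cmdlets come from the ADDSDeployment module in Server 2012 and later; on Server 2008 the equivalent is the dcpromo wizard, and the domain name is a placeholder):

        Install-WindowsFeature AD-Domain-Services
        Install-ADDSDomainController -DomainName "corp.example.com" -InstallDns -Credential (Get-Credential)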

    Read the article

  • Mysql Servers for Attendance System

    - by foo
    I'm building an attendance system. There are about 20 places where people will check in and check out using a Mifare 1K card, and it will use MySQL as the database. The system will display something like "#ID IN: 800AM" the first time a user checks in, and "#ID OUT: 400PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A checks in at location #1 but the server at location #1 goes down by the time he wants to return home, he needs to go to location #2 or the nearest server to check out. The server at location #2 should then display "#ID OUT: 400PM" and not "#ID IN: 400PM", since he has already checked in. So, what should I use to make this work? My main concern is the network (another department manages it), which is very unpredictable; it just loves to go down whenever it wants to.
    Update: I didn't realize my question wasn't clear until you pointed it out, sorry about that. My real question is: how can I configure my MySQL servers (20 of them) to stay synchronized with each other? MySQL Cluster? (I tried reading about it, but I'm not sure it's the right thing to do.)
    My current setup (first phase): a local database for each server, OS: Slackware, a main server that keeps track of which staff member is at which server, and a web-based front end for users to see their history (which connects to the relevant server based on their records).
    Main pro: no worries about network problems, since each database is local.
    Main cons: a user can only check in and out at the same server; the databases/servers are not connected to each other; and I have to add a user to every server where they might want to check in. That means if a user wants to go to location B, he must check out at location A first and then check in at location B, because the server at location B doesn't know he already checked in at A.
    By the way, I've already centralized NTP on a local server. As for the network, let's just say I don't have the authority to make changes to improve it. It won't affect all 20 servers at once; usually just a few of them, several times a week. If there is anything else you would like me to answer, please just ask.
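    For the synchronisation question itself, a minimal sketch of classic MySQL master/slave replication between two of the servers (server IDs, host names and credentials are placeholders; with 20 unreliable sites you would still have to decide between one central master, circular replication or MySQL Cluster):

        # master my.cnf, then restart mysqld and create a replication user (GRANT REPLICATION SLAVE)
        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

        # slave my.cnf
        [mysqld]
        server-id = 2

        -- on the slave, using the coordinates reported by SHOW MASTER STATUS on the master
        CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl',
            MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
        START SLAVE;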

    Read the article

  • Git: push via SSH to a root-owned repository with SSH root logins disabled

    - by anthonysomerset
    Is that even possible? Summary: I'm running Puppet master on a server, and ideally we want root logins via SSH disabled; all access that requires root should be forced through sudo. However, Puppet is set up with a git repo to manage the manifests, and this repo is currently owned by root. At the moment I only know of two (less than ideal) solutions: allow root access via key auth only (if so, how can I lock it down to only allow the git push commands?), or have the repo in /etc/puppet owned by a different user (will Puppet work reliably with this?).
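    For the second option, a hedged sketch of handing the repo to a dedicated group so pushes can go over a normal, non-root SSH account (the user and group names are made up, and whether Puppet is happy reading the manifests with these owners is exactly the open question above):

        groupadd puppetgit
        usermod -a -G puppetgit deployuser
        chgrp -R puppetgit /etc/puppet
        chmod -R g+rwX /etc/puppet
        find /etc/puppet -type d -exec chmod g+s {} \;
        cd /etc/puppet && git config core.sharedRepository group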

    Read the article

  • Importing a mysqldump file does not import triggers due to some permission problems

    - by user51792
    Hello. Whenever I try to import a mysqldump export on another server, the triggers are never created, and if I remember correctly I get an error message that superuser permission is required. If I remove the DEFINER it usually works, but if possible I would prefer not to edit the SQL file. When I simply copy over the .MYD, .MYI and .frm files, everything works perfectly. How can I import a mysqldump file so that the triggers are created as well?
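    If editing the dump by hand is the objection, the usual workarounds are either granting the importing account the missing privilege or stripping the DEFINER clauses on the fly; a hedged sketch (file names and the account are placeholders):

        -- option 1: from a root MySQL session, give the importing account the privilege it lacks
        GRANT SUPER ON *.* TO 'importer'@'localhost';

        # option 2: drop DEFINER clauses while piping the dump in, without touching the file on disk
        sed -e 's/DEFINER=`[^`]*`@`[^`]*`//g' dump.sql | mysql -u importer -p targetdb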

    Read the article

  • Windows Server 2008 R2 Firewall - Interface specific rules

    - by Mehmet Ergut
    I'm trying to define per-interface rules, much like it was possible in Server 2003. We will be replacing our old 2003 server with a new 2008 R2 server, which runs IIS and SQL Server; it's a dedicated server at the hosting company. We use an OpenVPN connection from the office to access SQL Server, Remote Desktop, FTP and other administrative services, and only HTTP and SSH are listening on the public interface. On the old 2003 server I was able to define global rules for HTTP and SSH and allow the other services only on the VPN interface; I can't find a way to do the same on 2008 R2. I understand there is the Network Location Awareness service and that firewall rules are applied according to the current network location, but I don't see the purpose of this on a server. The closest solution I found is to set the scope on the firewall rule and restrict remote IP addresses to the private subnet of the office, but the ports will still be listening on the public interface. So how can I restrict a firewall rule to connections coming from the VPN interface? A note on this page states that scoping a rule to an interface no longer exists: "In earlier versions of Windows, many of these commands accepted a parameter called interface. This parameter is not supported in the firewall context in Windows Vista or later versions of Windows." I can't believe they simply removed a core piece of firewall functionality that every firewall has; there must be a way to restrict a rule to an interface. Any ideas? I'm still unable to find an adequate solution, so for now my workaround is this: administrative services listen on the VPN IP address; firewall rules restrict the scope to the local IP address of the VPN; public services listen on all interfaces with no scope restriction. This is not optimal: if I change the IP address of the VPN, I need to edit the firewall rules too, which would not be the case if the rules were bound to the interface.
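    For what it's worth, the scope-based workaround described above can be scripted with netsh; a hedged sketch that ties the rule to the VPN's local address rather than to the interface itself (rule name, port and address are examples only):

        netsh advfirewall firewall add rule name="RDP via VPN only" dir=in action=allow protocol=TCP localport=3389 localip=10.8.0.1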

    Read the article

  • Is it possible to rsync your web site to another backup server and use the same .htaccess files?

    - by stephenmm
    I am trying to use rsync to replicate all the files from one web server to another server that could act as a backup if the first one went down. The problem I am having is that the .htaccess file requires AuthUserFile to contain the fully qualified path to the .htpasswd file, and I cannot make the paths the same on the two machines. Does anyone know how I might use the same .htaccess file on two different servers? Thanks for any help.
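    One way to sidestep the path difference is to make the same path resolve on both machines, for example with a symlink on the backup server, and then rsync as usual; a sketch with invented paths:

        # on the backup server: make the primary's document-root path point at the real location
        ln -s /srv/www/site /var/www/site

        # on the primary: replicate the site, .htaccess and .htpasswd included
        rsync -az --delete /var/www/site/ backup.example.com:/var/www/site/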

    Read the article

  • Navicat 8/9 crashes my Debian/Linux

    - by meder
    I run Debian 5 + GNOME, and it seems that after I made certain updates with aptitude, whenever I run a query in Navicat (an SQL client that runs under WINE), the GUI dies as the results are being presented: Linux drops to the console, the session restarts, and I'm asked to log in again (all my programs crash). Has anyone experienced this before, or does anyone have a clue as to how I could go about debugging it? I suspect it's some issue between GNOME and WINE, but I'm not sure.
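    A hedged first step for debugging, since the whole session dies: check the X server and session logs immediately after a crash (the paths below are the usual defaults on Debian 5):

        less /var/log/Xorg.0.log    # look for "Fatal server error" or a backtrace
        less ~/.xsession-errors     # GNOME/session-level errors
        wine --version              # note the WINE version in case the aptitude update changed it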

    Read the article

  • Connecting to an UltraVNC server in one click

    - by Fabian
    I have the following setup on two Win7 machines connected over the LAN: UltraVNC Server <---- UltraVNC Viewer. Since I'm only interested in always connecting to the same server, I was wondering if it is possible to start the viewer and connect to the server with a single click. Start viewer + connect + enter password (three steps). Since I've already figured out that I can automate the last step with the -password argument, I'm down to two steps: start viewer + connect. How can I tell the UltraVNC viewer to connect automatically?
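    A hedged sketch of a shortcut target, assuming the viewer accepts the host on the command line the way most VNC viewers do (host and password are placeholders, and storing a VNC password in a shortcut is a trade-off to be aware of):

        "C:\Program Files\UltraVNC\vncviewer.exe" 192.168.1.50 -password secret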

    Read the article

  • CentOS MySQL version is 5.5, however phpMyAdmin still says version 5.1

    - by Marc Rasmussen
    When I run the following in my console:

        [root@****~]# mysql -u root -p -e 'SELECT VERSION();'
        Enter password:
        +-----------+
        | VERSION() |
        +-----------+
        | 5.5.39    |
        +-----------+

    which should be the correct version. However, when I open phpMyAdmin on my server it shows the following:

        Server: Localhost via UNIX socket
        Program: MySQL
        Program version: 5.1.73 - Source distribution

    So which version is the correct one, and how do I make sure that the database is running on 5.5? Note: I have already restarted MySQL several times without any change.
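    A couple of hedged checks that usually explain this kind of mismatch - phpMyAdmin talking to a different socket, or to a second mysqld that is still installed (paths and package manager are typical CentOS defaults, not confirmed):

        ps aux | grep mysqld                                   # is more than one mysqld running?
        mysql -u root -p -e "SHOW VARIABLES LIKE 'socket';"    # socket used by the 5.5 server
        grep -R "default_socket" /etc/php.ini /etc/php.d/      # socket PHP/phpMyAdmin connects to
        rpm -qa | grep -i mysql                                # are both 5.1 and 5.5 packages installed?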

    Read the article

  • Nexus One vs Xperia X10

    - by Mark
    Trying to decide which phone to get; I think I've narrowed it down to one of these two. The Xperia X10 seems pretty sweet except for one drawback: the phone/voice quality seems to be lacking. I haven't heard anything about the quality on the Nexus though. Otherwise they seem pretty neck and neck. What do you guys think?

    Read the article

  • PC Safari Dropdown Site List

    - by ikurtz
    Please excuse me if this is too easy and I just haven't looked in the right places. The issue is this: when I use Safari, I haven't found a drop-down list like the one on the IE address bar. In IE you can drop down that list and choose sites; in Safari I haven't found such a drop-down list. Is it available? How do I enable it? Thanks.

    Read the article

  • How to reliably receive a message from AWS that my instance was rebooted / terminated / stopped?

    - by Andrew Smith
    I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are:
    - The message passed from AWS is 100% reliable; e.g. when Nagios is down and the message cannot be delivered, it is re-delivered promptly when Nagios comes back up.
    - The message arrives quickly.
    - There is no need to scan the status of all instances via the EC2 API all the time, only once in a while.
    Many thanks!
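    For the occasional sweep, a hedged sketch of polling instance states with the AWS CLI, which a small cron job could use to reconcile Nagios with reality (region/profile flags omitted; this is the fallback poll, not the push-style notification asked about):

        aws ec2 describe-instances \
            --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
            --output text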

    Read the article

  • Deployment/provisioning tool for commercial applications (not developed in-house)

    - by mfinni
    I help manage a few hosted commercial applications, and we have a lot of manual processes involved when doing new customer-instance deployments into the shared (multi-tenant) environment. Allow me to describe the most relevant features, and then we can talk about the tools.
    We have an application on AIX that requires dozens of changes to config files (some plain text, some XML) as well as a good number of commands to be run on multiple servers - some to start the new instance, some to restart our shared authentication and reporting engines, etc. The config changes follow templates, of course. The servers in question will also depend on the initial conditions specified by the implementer/deployer - we may choose to deploy a given customer to our servers in Europe, or one set of servers may be active-active whereas a different set of servers is active-passive - in short, there's a lot of complications.
    We have another application that runs on IIS 6 and SQL. The DBAs don't want any automation of the SQL components and that's fine with me, but automating the IIS bit would be great. For a new customer instance, we make a filesystem copy of a template Virtual Directory target named after the new customer, make a new AppPool to match, edit a VirDir template .xml file to replace the filepaths and AppPool names with the new ones, and then make a new VirDir from the modified template XML to point to the new filesystem folder and app pool.
    For the first case, something like ControlTier or Chef might be good. For the second, the new(ish) Web Deploy from MS would probably do a good job. Has anyone used these tools or others to do something similar for applications? More of a nice-to-have, not a fixed requirement - has anyone used anything that works on both platforms? I'm looking for something free, because the official word is that within a year we will have whatever HP has renamed the OpsWare suite, which should be able to do stuff like this.
    Edit - based on someone's suggestion I looked at CFEngine for the AIX application, but it doesn't seem to address my pain. The problem isn't keeping a given config synced across dozens of servers - we have rsync for that. The problem is that onboarding a new customer instance touches dozens of files, putting pieces of the same or similar information into them - some are new stanzas in existing files, some are new files, and some are new directories. This is a several-hours-long process that is also error-prone because it's mostly done by hand. I guess I'm looking for config-file generation and management. I have built a small Perl script to do something similar for a much smaller case - it binds a CSV file into variables and then does a copy-and-search-and-replace from a set of template config files. I could probably do the same here.
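    As a very rough sketch of the copy-and-substitute step described for the IIS side (every path, name and placeholder token here is invented purely for illustration):

        $cust = "NewCustomer"
        Copy-Item "D:\Sites\_template" "D:\Sites\$cust" -Recurse
        (Get-Content "D:\Templates\virdir-template.xml") -replace "__CUSTOMER__", $cust |
            Set-Content "D:\Deploy\$cust-virdir.xml"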

    Read the article

  • How Do I Automatically Update My Database Nightly

    - by Russ
    Currently, every day before I start work, I complete the following procedure:
    - ssh to the production server
    - gzip our daily database dump file
    - scp the gzipped dump file over to my computer
    - gunzip the dump file
    - dropdb mydatabase
    - createdb mydatabase
    - psql mydatabase < dump.sql
    Is it possible (I'm sure it is) to automate this process on Mac OS X, so that it is done by the time I get to work in the morning? If so, what is the quickest and easiest way?
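    A minimal sketch of wrapping those steps in a script plus a cron entry on OS X (host, user, paths and database name are placeholders; launchd is the more native scheduler on OS X, but cron works too):

        #!/bin/sh
        # refresh_db.sh - pull the nightly dump and reload the local database
        ssh produser@prod.example.com 'gzip -f /var/backups/dump.sql'
        scp produser@prod.example.com:/var/backups/dump.sql.gz /tmp/
        gunzip -f /tmp/dump.sql.gz
        dropdb mydatabase
        createdb mydatabase
        psql mydatabase < /tmp/dump.sql

        # crontab entry (crontab -e): run the script at 06:00 every morning
        # 0 6 * * * $HOME/bin/refresh_db.sh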

    Read the article

  • Linux Software RAID1 Rebuild Completes, but after reboot, it's degraded again

    - by zimmy6996
    I have been beating my head against an issue here, and I'm now turning to the internet for help. I have a system running Mandrake Linux with the following configuration:
    - /dev/hda - an IDE drive with the partitions that boot the system and make up most of the file system.
    - /dev/sda - drive 1 of 2 for the software RAID /dev/md0.
    - /dev/sdb - drive 2 of 2 for the software RAID /dev/md0.
    md0 is mounted via fstab as /data-storage, so it is not critical to the system's ability to boot; we can comment it out of fstab and the system works just fine either way. The problem is that the sdb drive failed, so I shut the box down, pulled the failed disk and installed a new one. When the system boots up, /proc/mdstat shows only sda as part of the RAID. I then run the various commands to rebuild the RAID onto /dev/sdb. Everything rebuilds correctly, and upon completion /proc/mdstat shows the two drives sda1(0) and sdb1(1). Everything looks great. Then you reboot the box... and once rebooted, sdb is missing again from the RAID, as if the rebuild never happened. I can walk through the commands to rebuild it again and it will work, but after every reboot the box seems to make sdb just vanish. The really odd thing is that if, after a reboot, I pull sda out of the box and try to get the system to load with only the rebuilt sdb drive in place, it actually throws an error just after GRUB, says something about a drive error, and has to shut down. Thoughts? I'm starting to wonder if GRUB has something to do with this mess - maybe the drive isn't being set up within GRUB to be visible at boot? This RAID array isn't necessary for the system to boot, but when the replacement drive is in there without sda the system won't boot, which makes me believe there is something to that. On top of that, there just seems to be something wonky about the drive falling off the RAID after a reboot. I've hit the point of pounding my head on the keyboard; any help would be greatly appreciated!
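    Two hedged things worth checking, since they commonly cause exactly this "rebuild forgotten after reboot" symptom - the partition type on the replacement disk and the mdadm configuration file (device names mirror the ones in the post):

        # the replacement partition should be type fd (Linux raid autodetect), like the surviving disk's
        sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table from the good disk
        fdisk -l /dev/sdb                       # confirm the partition type shows as "fd"

        # re-add the partition and record the array so it is assembled at boot
        mdadm /dev/md0 --add /dev/sdb1
        mdadm --detail --scan >> /etc/mdadm.conf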

    Read the article

  • Is there a performance penalty using in-place models/families in a large Revit project

    - by Jaips
    (I'm quite new to Revit, so apologies if my concepts are a bit inaccurate.) I have heard that using in-place models in Revit projects is poor practice since it can slow down a large project. However, I noticed that Revit also organises in-place models by lumping them in with the rest of the families. So my question is: is there really any performance penalty or benefit to inserting families from an external file as opposed to creating in-place models in a Revit project?

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I have a charity with two adjacent medium-sized modern detached houses (in the UK); the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless and want it to work across both buildings; being a charity, they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skype. My initial thought was to connect the buildings with fibre. So:
    Option 1: use fibre between the buildings - sufficient cable and two TP-LINK MC100CM Fast Ethernet media converters, cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building.
    Option 2: use two TP-Link TL-WA7510N high-powered outdoor 5GHz 15dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level, the cost is ~£100, and it is much easier to fit than fibre. Is using the TL-WA7510Ns overkill - is there something more suitable? I had hoped to use some Netgear kit, e.g. two DGN2200s, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job?
    Option 3: connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arrestors expensive, and what can be done to mitigate the risk of shock?
    This all falls outside my area of expertise, so I would really appreciate some advice.

    Read the article

  • How can I disable the css {position:fixed} side-bar on Slashdot?

    - by Tom
    When I look at a story on Slashdot, I see a side-bar that sits constantly in the upper-left corner, and every time I open a story I have to dismiss it by clicking the "slash" sign in its upper-right corner. How can I disable it so that it always has CSS position absolute, relative or static? Is there an option for it? I find it slows my browser down when I am scrolling the page. Thank you, Tom
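    A hedged user-stylesheet sketch (userContent.css in Firefox, or a Stylish/user-style rule); the selectors below are guesses and need to be confirmed against the actual page with the browser's inspector:

        @-moz-document domain("slashdot.org") {
            /* hypothetical selectors - replace with the sidebar's real id or class */
            #slashboxes, .fixed-sidebar { position: static !important; }
        }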

    Read the article
