Search Results

Search found 10445 results on 418 pages for 'basic'.


  • Migrating ODBC information through a batch file

    - by DeskSide
    I am a desktop support technician currently working on a large-scale migration project across multiple sites. I am looking for a way to transfer ODBC entries from Windows XP to Windows 7. If anyone knows of a program or anything prebuilt that already does this, please redirect me. I've already looked but haven't found anything, so I'm trying to build my own. I know enough basic programming to read the work of others and monkey around with something that already exists, but not much else. I have come across a custom batch file written at one site that (among other things) exports ODBC information from the old computer and stores it on a server (labelled as y: through net use at the beginning of the file), then later transfers it from the server to a new computer. The pre-existing code is for Windows XP to XP migrations. Here are the pertinent bits of code:

        echo Exporting ODBC Information
        start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

    and later on:

        echo Importing ODBC
        start /wait regedit /s "y:\%username%\odbc.reg"

    We are now migrating from Windows XP to 7, and this part of the batch file still seems to work for this particular site, where Oracle 8i and 10g are used. I'm looking to use my cut-down version of this code at multiple sites, and I'm wondering if the same lines of code will still work for anything other than Oracle. Also, my research on this issue has shown that 64-bit operating systems keep 32-bit and 64-bit entries in different registry locations, and I'm wondering what effect that would have on the code. Could I copy the same data to both parts of the registry, in hopes of catching everything? Any assistance would be appreciated. Thank you for your time.
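
    On the 64-bit question: 64-bit Windows keeps 32-bit ODBC entries under the Wow6432Node registry path, so an XP export imported as-is only lands in the 64-bit hive. Below is a minimal, untested sketch of one way to populate both hives, assuming PowerShell on the Windows 7 side (it ships with Windows 7) and that rewriting the key path is the only change the .reg file needs:

        rem Export on the XP side, exactly as the existing file does
        start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

        rem Import on the Windows 7 x64 side: once as-is (64-bit hive) ...
        start /wait regedit /s "y:\%username%\odbc.reg"

        rem ... and once with the key path rewritten for the 32-bit hive (Wow6432Node)
        powershell -command "(Get-Content 'y:\%username%\odbc.reg') -replace 'SOFTWARE\\ODBC', 'SOFTWARE\Wow6432Node\ODBC' | Set-Content -Encoding Unicode 'y:\%username%\odbc32.reg'"
        start /wait regedit /s "y:\%username%\odbc32.reg"

    Note that 32-bit Oracle drivers are only visible to 32-bit applications either way, so whether this catches everything still needs testing per site.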

    Read the article

  • Is there a monitoring software suite that will alert me if it has received no activity in a time period?

    - by matt b
    This might be a very basic question, but I am not very familiar with the exact features of Nagios versus Munin versus other monitoring tools. Let's say we have a process that needs to run daily for some very important infrastructure reasons. We've had cases where the process did not run or was otherwise down for a number of days before anyone noticed. I'd like to set up a system that will enable me to easily know when the daily run did not take place for some reason. I can set up this process to send an email on every successful run (or every failed run), but I do not trust that the people receiving this email would notice an absence of an "I'm OK" message. What I am envisioning is some type of "tripwire" service which this V.I.P. (very-important-process) can send a status message to each time it runs, whether successfully or not; and if the "tripwire" service has not received any word from the VIP within a configurable amount of time, it can then send an alert to someone. (The difference between what I envision and the first approach I outlined is a service that sends a message only in abnormal conditions, rather than a service that sends messages each day that the status is normal/OK). Can Nagios be set up to send an alert like this, if it has not heard from a certain service/device/process in N days? Is there another tool out there which does have this feature?
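
    For what it's worth, Nagios has exactly this "tripwire" model built in: passive checks plus freshness checking. The process submits a passive result on every run; if Nagios hears nothing within freshness_threshold seconds, it runs the stale-handler command and alerts. A minimal sketch — the host name, service name, and threshold below are placeholders, and generic-service stands in for a site-specific template:

        define service {
            use                     generic-service   ; site-specific template
            host_name               app-server
            service_description     VIP daily run
            active_checks_enabled   0                 ; results only arrive passively
            passive_checks_enabled  1
            check_freshness         1
            freshness_threshold     93600             ; 26h = one day plus slack
            check_command           alert_stale_vip   ; only runs when the result is stale
        }

        define command {
            command_name  alert_stale_vip
            command_line  $USER1$/check_dummy 2 "No run reported in over 26 hours"
        }

    The daily process then posts its result via the external command file or NSCA; check_dummy (from the standard plugins) forces a CRITICAL whenever the freshness check fires.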

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to the .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables, as the ssh server is bare-bones and ships a typical *bear multi-link binary for the basic Unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc. Tre`
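
    rsync has no per-host config file that I know of, but since it rides on ssh, the host half can live in ~/.ssh/config and the never-changing rsync options in a small shell function. A sketch — the address, port, and rsync path are placeholders:

        # ~/.ssh/config
        Host android
            HostName 192.168.1.50   # the phone's address
            User root
            Port 2222               # whatever the phone's sshd listens on

        # ~/.bashrc
        rsa() {   # "rsync-android": bakes in the fixed options
            rsync --rsync-path=/path/to/rsync/android/files/rsync \
                  --no-g --no-p --no-t "$@"
        }

        # usage
        rsa android:/mnt/sdcard/DCIM/ ./photos/

    Unlike a fixed alias, a function still lets you add or override options per call.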

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. For that product, no support is provided anymore by the vendor.

    Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    - MySQL contains the description cut off just before the € sign, so it's not PHP's fault
    - The Access databases contain the full descriptions with the € sign, so it's not the webmaster writing bad descriptions, nor eDisplay cutting them
    - The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign mangled like this: ^À
    - vsftpd's log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c — which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP)
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes
    - [Add] Manually uploading the SQL file via SFTP also messes up the euro
    - [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
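
    Since the bytes arrive wrong regardless of transfer mode, one server-side angle is to re-encode the file between the upload and the install script. A sketch, assuming the source file is Windows-1252 (what a VB/Access app would typically write) and MySQL expects UTF-8 — the path is a placeholder, and the real encoding should be checked first:

        # what did the file actually arrive as?
        file -bi /srv/www/domains/example.it/htdocs/db.sql

        # convert (adjust -f to whatever `file` reported)
        iconv -f WINDOWS-1252 -t UTF-8 db.sql > db.utf8.sql && mv db.utf8.sql db.sql

    This could hang off a cron job or an inotify watch on the upload directory, so it runs automatically after each eDisplay upload.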

    Read the article

  • mod_perl custom configuration directives don't work when placed in .htaccess and there is <Location>

    - by al_l_ex
    I'm trying to complete Redmine's feature request #2693: Use Redmine.pm to authenticate for any directory (1). I don't have much knowledge of all these things and need help. Redmine uses the mod_perl module Redmine.pm for authentication & authorization. This module defines several custom configuration directives. I've successfully modified the patch from (1), and it works when all the config is in <Location>:

        <Location /digischrank/test>
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
            RedmineProject "digischrank"
        </Location>

    But when I move one of these directives (RedmineProject, see (1)) into a .htaccess file, Redmine.pm doesn't see it! I've tried changing <Location> to <Directory> and adding AllowOverride All. The directives from .htaccess are visible, but the remaining ones from <Directory> are not. I don't want to move all the directives into each .htaccess. When I add a <Location> in addition to the <Directory>, again only the directives from <Location> are visible. As far as I know, the directives should be merged. Am I missing something?
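
    For reference, a sketch of the failing <Directory> variant described above (the filesystem path is a placeholder; RedmineProject is meant to come from .htaccess):

        <Directory /srv/www/digischrank/test>
            AllowOverride All
            AuthType basic
            AuthName "Digischrank Test"
            Require valid-user
            PerlAccessHandler Apache::Authn::Redmine::access_handler
            PerlAuthenHandler Apache::Authn::Redmine::authen_handler
            RedmineDSN "DBI:mysql:database=SomedaTaBAse;host=localhost"
            RedmineDbUser "SoMeuSer"
            RedmineDbPass "SomePaSS"
        </Directory>

        # .htaccess inside the protected directory
        RedmineProject "digischrank"

    Whether directives merge across these scopes depends on how Redmine.pm declares them (its override flags and any custom merge routine), which is presumably where this breaks.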

    Read the article

  • Never getting a JSON response from my server-side PHP proxy script, though I do with others

    - by Dohk
    I'm on PHP 5.3.4 and Apache 2.2, btw. So I'm using (or trying to use) Simple PHP Proxy (Simple PHP Proxy). I enter a URL at his example page at SPP Example Page and it works fine; I see the JSON response and all the headers. However, when I copy the exact URL, only changing it to point at localhost, I get both empty headers and no JSON. Assuming that the script on his site is the same one I downloaded, could this be due to a multitude of things, or a setting in Apache and/or the PHP ini? So for example:

        benalman.com/code/projects/php-simple-proxy/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

    That will get me a ton of info back. Now changing to localhost:

        http://localhost/ba-simple-proxy.php?url=http://github.com/&full_headers=1&full_status=1

        {"headers":[],"status":{"url":"https:\/\/github.com\/","content_type":"text\/html","http_code":301,"header_size":194,"request_size":182,"filetime":-1,"ssl_verify_result":0,"redirect_count":1,"total_time":0.094,"namelookup_time":0,"connect_time":0.047,"pretransfer_time":0,"size_upload":0,"size_download":185,"speed_download":1968,"speed_upload":0,"download_content_length":185,"upload_content_length":0,"starttransfer_time":0,"redirect_time":0.047,"certinfo":[]},"contents":null}

    I even went basic and just used some curl, and of course got empty objects back, other than false for my content and the url I set in my JSON. Any help or ideas are deeply appreciated.
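
    Reading that status blob: http_code 301 with contents null suggests the local copy hit github.com's redirect toward HTTPS and then got nothing back, which on a fresh local PHP/curl stack is very often an SSL/CA-bundle problem rather than a bug in the proxy script. A minimal probe to test that theory outside the script (the URL is just an example):

        <?php
        // probe: can this PHP build fetch an HTTPS page at all?
        $ch = curl_init('https://github.com/');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $body = curl_exec($ch);
        if ($body === false) {
            // an "SSL certificate problem" here would confirm a CA bundle issue
            echo 'curl error: ' . curl_error($ch) . "\n";
        } else {
            echo 'fetched ' . strlen($body) . " bytes\n";
        }
        curl_close($ch);

    If the probe fails on certificate verification, pointing curl at a current CA bundle (CURLOPT_CAINFO in the script, on this PHP version) is the usual fix.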

    Read the article

  • Firefox url / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark and (under a line) says "open all in tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options:

    a) Record a simple macro with any number of "superuser" utilities ('a' is not the preferred option, since many little macros are hard to keep track of)
    b) Use AutoHotkey (similar to option 'a' and more flexible once you learn the basics)
    c) How does Firefox load all those tabs? The info must be stored somewhere (as a type of URL??)

    Quick summary: the moment I click on "open all in tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs?

    EDIT #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar, which I hide anyway :o)

    EDIT #2: I tried to keep the question simple and not mention AutoHotkey programming. The objective is to launch all tabs using a button on an AHK gui. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he (she) reminded me that I can easily find the folder. Now, how to launch the urls inside that folder? FYI, (basic-level) AHK works like this:

        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return

    Question still open: the moment I click on "open all in tabs", I am clicking on something very similar to a hyperlink. How to 'replicate' the way Firefox launches the tabs with one click?
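
    A sketch of the AHK side, assuming you're content to keep the group's URLs in the script itself rather than reading them out of Firefox's bookmark store (the URLs are placeholders; Run hands each one to the default browser, which opens it as a new tab):

        ButtonOpenTabGroup:
        tabs := "http://example.com/a|http://example.com/b|http://example.com/c"
        Loop, Parse, tabs, |
        {
            Run, %A_LoopField%
            Sleep, 200   ; brief pause so the browser keeps up
        }
        Return

    The actual bookmark data lives in places.sqlite in the Firefox profile, so a fancier version could query that file instead of hard-coding the list.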

    Read the article

  • Diagnosing RAM issues

    - by TaylorND
    I have an old Acer Aspire T180 desktop. The specs are as follows:

        AMD Athlon 64 3800+ 2.4GHz
        1GB DDR2 SDRAM
        160GB hard drive
        DVD-Writer (DVD±R/±RW)
        Gigabit Ethernet
        17" Active Matrix TFT Color LCD
        Windows Vista Home Basic
        Mini-tower AST180-UA381B

    According to the information in the computer's documentation, the computer comes with 1 GB of RAM. It has two DDR2 SDRAM sticks. I used to have Windows Vista installed; then I removed it and installed Windows 7, and I have since removed Windows 7 and installed Windows XP. With both RAM sticks in, Windows XP reports 768 MB. Isn't this supposed to be 1 GB of RAM, i.e. 1024 MB? Is the amount of RAM installed only partly used by the operating system? Is there something I'm missing? If I remove either one of the RAM sticks, I'm left with 448 MB of RAM. These numbers don't seem to add up: if each of the RAM sticks provides at least 448 MB, shouldn't both together provide 896 MB? Even then, isn't that less than a GB of RAM? I'm not too experienced in hardware, so I thought this would be the best place to ask. As a follow-up question, is the RAM I have enough to run/multitask with Windows XP efficiently? I plan to do a lot of computing with the system (although not gaming); should I invest in more RAM?

    Read the article

  • From a performance standpoint, is there a preferred way to set up Thunderbird message filters?

    - by Steve V.
    When I used Kontact, I used filters heavily to sort my incoming email into various folders. Since switching to Thunderbird, I've been slowly recreating these filters as new mail arrives. This seems like the perfect time to rethink how I use message filters. The way I see it, there are two basic ways to filter messages: either I can have lots of filters, or I can have lots of criteria. An example is in order. Assume that I get emails from four people (Boss, CEO, Intern, and StackOverflow) that I want to sort into two folders, "Stack" and "Work".

    Option A:

        Filter 1: if FROM contains "Boss" -> move to "Work"
        Filter 2: if FROM contains "CEO" -> move to "Work"
        Filter 3: if FROM contains "Intern" -> move to "Work"
        Filter 4: if FROM contains "StackOverflow" -> move to "Stack"

    Option B:

        Filter 1: if FROM contains "Boss" OR FROM contains "CEO" OR FROM contains "Intern" -> move to "Work"
        Filter 2: if FROM contains "StackOverflow" -> move to "Stack"

    Assuming that when I'm done I'll have about a hundred different criteria to filter on, is one of these methods better than the other from a performance standpoint?

    Read the article

  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. Only I have one big problem: startup time. Its BIOS needs 62 sec to load, then from Grub start to the password-entry screen it's another 26 sec. I think this is a lot, because my old PC needs 34 sec for BIOS and another 8 sec to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930, running a Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (forgot the clock speed) running a Lucid Lynx 32-bit from an ATA 100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD ROM, 2. HDD, 3. floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I made any installs. As far as I know, just the SSD should be enough to make a noticeable difference in boot time. I thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.

    Read the article

  • I (stupidly) converted a TrueCrypt encrypted disk to GPT in Disk Management: now TrueCrypt won't mount it

    - by asilentfire
    Backstory: after moving a Macrium Reflect disk image from my TrueCrypt external drive (with whole-disk encryption) onto an unencrypted drive, and using Windows PE with Macrium Reflect to restore my internal disk from the recovery image on the unencrypted external drive, my Windows 8 failed to boot. I then went back and also recovered the system partition (looking now, it is currently EFI), but I still couldn't boot into my backup. I was in a hurry to get online for something, so I just did a clean install of Windows 8 without the backup. After I installed Windows 8, I went into Disk Management out of curiosity to see if there were other partitions with Windows 8 that Macrium might have missed, and there are: by default there is a Recovery Partition of 100MB. My memory of this is hazy, as I was trying to get up and running for an exam at 4 AM: something in Disk Management prompted me to convert my encrypted external drive to GPT. I have no idea why I did this, but I went ahead and allowed it to convert my TrueCrypt drive to GPT. Now I can't mount the drive in TrueCrypt. Disk Management sees it as Disk 1, Basic, and Unallocated. I tried converting it back to MBR with Disk Management, but no dice with TrueCrypt :( If I try to mount the disk in TrueCrypt I get the message: Incorrect password or not a TrueCrypt volume. I should never have messed with a TrueCrypt drive in Disk Management, but I did. I have important college work on that drive, and fear I have lost it forever. PLEASE HELP

    Read the article

  • Running computer from separate room

    - by Dan
    I want my computer to be in the basement, but to use it on the first floor. Which cables should I run through the floor? Can some be wireless, or are there other methods? Here are some of the options I've thought of:

    - Basic: run DVI, USB (mouse), USB (keyboard), and an audio cable (4 cables)
    - USB hub option: run DVI and 1 USB, then use a USB hub to split it into mouse, keyboard, maybe even audio (2-3 cables)
    - HDMI option: if I get a new video card and monitor that support HDMI, would I be able to run both audio and video through it? Would the monitor have to have an audio out? Also, there is a lot of extra bandwidth in HDMI cables: could I send two monitors over 1 cable, or would I have to use 2 cables? How about sending mouse/keyboard through the HDMI cable? I see a lot of monitors with USB hubs built in, but I assume I'd still have to wire HDMI + 1 USB cable to use the USB hubs?
    - X terminal machine/thin client: I don't really know much about this option. Not sure if it would allow me to run graphics acceleration and watch movies; does anyone know more details about what this would allow me to do?
    - Other options: any other ways to do this? Can any of this be wireless?

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TL;DR: Google tends to ignore special characters in my search strings. Is there anything I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location?

    I just posted this question over at StackOverflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out further information. I attempted various search terms, all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically, one which I did try to search) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is I did search that, and the results I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me: Google often ignores special characters in my search strings, and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google?

    NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results based on certain localized text-based assumptions. Regardless, I would like Google to search for what I give it. Is there anything I can do to improve my results?

    Read the article

  • How to fix UNMOUNTABLE_BOOT_VOLUME (0x000000ED) on my Windows XP DELL laptop?

    - by Neil
    I have a Dell Latitude D410 running Windows XP. I am receiving the STOP: 0x000000ED (0X899CF030,0XC0000185,0X00000000,0X00000000) blue screen. Initially I tried everything specified in the Microsoft KB articles. At that time I was able to boot into the general safe mode. I pulled the hard drive and was able to run chkdsk on it; it noted that it had fixed some errors, but I was still unable to boot. I put a brand-new hard drive in the laptop. Windows XP installation worked up until the reboot, at which time the exact same error message came back up. What I have tried (all since the new hard drive was installed):

    - chkdsk /R
    - All suggested solutions in the Microsoft KB articles
    - Reseating the RAM
    - Opening the laptop, reseating all connectors, looking for signs of damage (saw none)
    - Resetting the BIOS options to default
    - Running the basic Dell diagnostics
    - Ran MEMTEST86+ V4.10 for 15 passes (overnight): 0 errors

    I have looked at the existing question "How can I boot XP after receiving stop error 0x000000ED" and am currently downloading the Ultimate Boot CD to use as a test, but I am not holding out a lot of hope, as I really doubt this brand-new hard drive is bad. Can anyone think of other areas I am missing?

    Read the article

  • Issue with a secure login - Why am I being redirected to the insecure login?

    - by mstrmrvls
    I'm having some issues getting a website working at my place of work. The issue was raised when a "double login" occurred from the secure login site. The second login prompt was actually coming from the HTTP domain, not HTTPS. In essence the situation is like this:

    1. The user navigates to https://mysite.com/something
    2. The login prompt pops up
    3. The user enters username and password
    4. The user is presented with ANOTHER login prompt (IE will say it's insecure, and the address bar reflects that)
    5. If the user enters their password at the insecure prompt, they will be logged in to the insecure site; if they hit cancel, they are presented with a 401 page
    6. Navigating back to https://somesite.com/something bypasses the login prompt and logs them in to the secure site automatically (a cookie, maybe)

    I'm a bit confused about why the user isn't being logged in properly the first time (redirected to non-SSL), yet any subsequent login is okay. I've been trying to use Fiddler to see what happens after the user puts in their password the first time, and trying to get Fiddler to log in to the site automatically (with no luck). I believe the website in question is using Basic/Digest authentication. Thanks for any help

    Read the article

  • Two network adapters in one Windows XP PC, how to make them work?

    - by Deele
    I need to set up networking so I can use two Ethernet cards inside one Windows XP SP2 PC: one for the internet connection, the second for an internal LAN. How should I configure each NIC (with what IPs, subnet masks and gateways) so I can use the internet on my PC and reach the devices on my LAN? I have found that some sort of rerouting is necessary inside my PC, but how does it work? I have already set up some configuration, but I can't use it while PC #1 NIC #1 is connected; I need to disconnect it to access the web interface. Current configuration (Switch #1, PC #1 and the NAS are gigabit ones, so I can access the NAS at gigabit speed):

        PC #1 NIC #1: IP XX.XXX.162.106, SN 255.255.255.248, GW XX.XXX.162.105
        PC #1 NIC #2: IP 10.0.0.1, SN 255.255.0.0, GW 0.0.0.0
        NAS #1 NIC #1: IP 10.0.0.12, SN 255.255.0.0, GW 0.0.0.0

    My question is: what exact configuration should I apply to every NIC in this LAN so it would work? Is it possible to achieve internet access for a laptop that is inside the NIC #2 LAN (should I just set up basic ICS)?
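
    For reference, one way to express that layout from the XP command line (netsh ships with XP; the interface names are placeholders, and the key point is that only the internet-facing NIC gets a default gateway):

        rem internet-facing NIC: address, mask, gateway, gateway metric
        netsh interface ip set address name="Internet" static XX.XXX.162.106 255.255.255.248 XX.XXX.162.105 1

        rem LAN-facing NIC: no gateway at all
        netsh interface ip set address name="LAN" static 10.0.0.1 255.255.0.0

    With no gateway on the LAN NIC, Windows routes 10.0.0.0/16 out the second card as a directly connected network and sends everything else via the first.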

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view, and it provides name resolution (to the internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect to my home network from outside (such as work) with an openvpn client. What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue I have with this is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things). In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN, I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests are going there now). I'm looking for suggestions as to how this can be accomplished.
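
    One way to get this split, assuming you're willing to run a small local dnsmasq on the laptop: dnsmasq's server=/domain/addr option forwards just one domain to a specific nameserver and everything else upstream. A sketch with placeholder addresses (10.8.0.1 standing in for the home DNS server as reachable over the VPN; /etc/resolv.dnsmasq holds the work DHCP resolvers, kept current by hand or by a dhclient hook):

        # /etc/dnsmasq.conf
        server=/myhomedomain.com/10.8.0.1    # home zone -> across the VPN
        resolv-file=/etc/resolv.dnsmasq      # everything else -> work's resolvers

        # /etc/resolv.conf (what applications actually use)
        nameserver 127.0.0.1

    The server=/…/ line can even stay in place permanently: while the VPN is down, queries for the home domain simply fail, and openvpn up/down scripts aren't strictly needed.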

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520. After playing around with taking members down and up and also reloading HAProxy (with -sf), I have noticed that appsession isn't 100% effective; it would appear that it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balancing in case request-learn didn't take. Unfortunately that produced scenarios where the prefix listed svr2 but the request was balanced to a different server. I am assuming it's because the appsession table is consulted first and sticks on that before using the cookie parameter. I have not tested using cookie as an inserted option (not a prefix on the existing cookie), but I am thinking it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall-through? Also, as a follow-up: is it not recommended to use both in the same backend? Cookie, as I understand it, takes less memory, is agnostic to reloads and has far better persistence reliability; appsession I assume takes less CPU, since it's reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? The stats socket's show table doesn't show anything except stick-tables.) Many thanks in advance, -Nick
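
    For comparison, the insert variant (as opposed to a prefix on the application's own JSESSIONID) looks roughly like this — a sketch, not the actual config from the gist, with server names and addresses as placeholders:

        backend app
            balance roundrobin
            cookie SERVERID insert indirect nocache
            server svr1 10.0.0.11:8080 cookie svr1 check
            server svr2 10.0.0.12:8080 cookie svr2 check

    Because the inserted cookie names the server directly, persistence survives reloads without any learned state on the proxy side.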

    Read the article

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform: SCSI disks in RAID1, 16GB of RAM, a dual-core CPU, and Oracle 10g Release 2. This would perhaps be a decent platform for running the DB only, but the same server (in a very simple "A-A mode" clustering) also runs Tomcat, with several Java servlets on it. Sadly there is no caching platform; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server runs both Tomcat JSP/Java and Oracle 10g with no caching, I have issues with the server going down. Often, sadly.

    QUESTION: What are my options for improving the performance of all these different apps?

    - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java; see the sketch after this list.)
    - Any "SQL cache", as in the MySQL and PostgreSQL world?
    - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thing I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)?
    - What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!
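
    On the pooling point: the usual answer in the Tomcat world is a container-managed JNDI DataSource, which pools Oracle connections so the servlets stop opening their own. A sketch of the context.xml entry — the resource name, credentials, SID, and pool sizes are all placeholders:

        <!-- conf/context.xml (Tomcat 5.5/6-era DBCP attributes) -->
        <Resource name="jdbc/AppDB" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@localhost:1521:ORCL"
                  username="app_user" password="secret"
                  maxActive="20" maxIdle="5" maxWait="10000"/>

    The servlets then fetch connections with a JNDI lookup of "java:comp/env/jdbc/AppDB" — assuming, of course, that they can be modified; if not, Oracle's shared-server mode is a DB-side alternative.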

    Read the article

  • Why am I getting a Sharepoint error on a simple "hello world" web page?

    - by Fetchez la vache
    I've been granted admin access to an internal IIS server on which I need to set up a web site. Before doing anything technical I wanted to ensure that I could access the server, but when attempting to access a simple page (one that does not refer to SharePoint) at http://localhost/index.html while logged onto the server directly, I am getting:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately.
        Parser Error Message: Could not load file or assembly 'Microsoft.SharePoint' or one of its dependencies. The system cannot find the file specified.

        Source Error:
        Line 1: <%@ Assembly Name="Microsoft.SharePoint"%><%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>

        Source File: /global.asax    Line: 1

        Assembly Load Trace: The following information can be helpful to determine why the assembly 'Microsoft.SharePoint' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Version Information: Microsoft .NET Framework Version:2.0.50727.5456; ASP.NET Version:2.0.50727.5456

    To be quite honest I know zip about SharePoint, so why am I getting a SharePoint error on a basic "hello world" HTML page? Cheers :)

    Update: I have since supposedly uninstalled SharePoint, but am still getting this error. Any ideas welcome!

    Read the article

  • Windows takes a very long time to shut down even in safe mode

    - by user1526247
    On Windows 7 the computer freezes for about 5 minutes once it gets to "Shutting down...". I can't remember when it started happening; I just lived with it for a while. The first thing I tried was a full scan with Microsoft Security Essentials. This did not solve the problem. I then went into msconfig and turned off everything I could get away with in the Startup and Services tabs. This did not solve the problem. I then uninstalled every program on this computer except the most basic ones. This did not solve the problem (I did not uninstall drivers or Catalyst). I then turned off every single service and rebooted. This did not solve the problem. I then booted into safe mode and just tried shutting it down. The problem even happens in safe mode. I have tried examining the event logs, but with no success: they just say things like "blah blah has entered the stopped state", with no real clues about which program is causing me all this grief. (It may be worth noting that Ubuntu is installed on the same computer, and the Ubuntu boot loader is the one being used.)
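
    One way to see what it's actually stalling on: Windows can print each service/step name during shutdown instead of the bare "Shutting down..." text, via the documented VerboseStatus policy value. A sketch of enabling it from an elevated prompt (reboot, then watch which name the shutdown hangs on):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v VerboseStatus /t REG_DWORD /d 1 /f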

    Read the article

  • How to fix missing RAID1 drive

    - by Sodved
    I had to do some fiddling about with the cables inside the box, and now I am getting a "Critical Error" about the RAID disks during startup. I have a Gigabyte GA-MA770T-UD3 motherboard; apparently the RAID controller is an AMD SB710 chip. I'm pretty sure I know what happened: the first time I rebooted, I had forgotten the power cable on one of the disks in the RAID1 (mirror) and let it boot up. So I shut down and put the power back in. Now when it boots up and I go into the RAID admin interface (between the BIOS screen and the OS loading):

    - it shows the RAID1 as in error
    - the logical device has one disk, and it says the other is disconnected or missing
    - the other physical disk shows up as a single disk

    If I boot to the OS (Windows 7 32-bit), the data all seems to be there. If I go into Computer Management, it says my partition is on a disk and working OK, but the other disk is offline because: "The disk is offline because it has a signature collision with another disk that is online". So I am guessing that because I STUPIDLY booted up with only one of the disks powered on, the other disk fell out of sync with the mirror and so now cannot rejoin it. How do I fix this? I want to get the RAID1 mirror working again. There does not appear to be any "Repair" option in the basic RAID admin tool I get into during startup before the OS boots. I have not made any explicit changes to the online disk (but I guess the OS has probably written some admin data).

    Read the article

  • Router as primary DNS server, Server as alternate? (or vice versa)

    - by Jakobud
    We have a very small business network, with a typical cable modem hooked into a DD-WRT router. We also run a basic CentOS server that does a variety of things, including acting as the primary DNS server for the office. The reason we need an internal DNS server is that we do a lot of internal web development and use the DNS server to add/remove local network URLs for internal website testing (like www.testsite.com.local). It's very important for us to be able to add/remove URL aliases easily in the DNS. The problem with this setup is that if we ever need to restart the CentOS server or take it offline for upgrades or whatever, then internet access is lost for all computers on the network. That's because each computer relies on that DNS server to access the internet, I guess. The router is online all the time and very rarely has to be restarted. It would be nice if we could set up the router to be the primary DNS server while still running DNS on our server, so we could still add our local testing URLs on the CentOS box but also take it down without losing internet access on the network. How would this be set up? Would I simply need to add both the router's and the server's IP addresses to each computer's IP settings? Is the router the primary DNS server and the server the secondary, or vice versa? Or can one of the two serve as a fallback for the other? What (if anything) needs to be configured on the router and the server in order for them to recognize that the other DNS server exists on the network? Does anyone have any newb-friendly resources for setting up something like this?
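
    One angle that fits this setup: DD-WRT's DNS service is dnsmasq, and dnsmasq can forward only the internal test zone to the CentOS box while resolving everything else itself. Clients then point at the router alone, and if the CentOS server goes down only the .local test names break, not general internet access. A sketch for the router's "Additional DNSMasq Options" box under Services (the address is a placeholder; the /local/ pattern matches names like www.testsite.com.local):

        # forward the internal test zone to the CentOS nameserver
        server=/local/192.168.1.10

    With the router's DHCP handing out the router itself as the DNS server (which DD-WRT typically does when its local DNS is enabled), no per-computer settings are needed.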

    Read the article

  • How to create a WHM/cPanel account, without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access) with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me: it either wants a subdomain or a fake domain, and won't allow two accounts for one domain. In the beginning there was only the root account, and it wouldn't let me log in to cPanel. A quick chat with tech support, and I am informed that I need to create a second account, which I did. So now I have an account, call it ns1me, for the domain mysite.com. Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a subdomain, so I fill the box in with django.mysite.com. I then realize that this has actually created a subdomain: going to django.mysite.com shows me its home directory, along with helpful information about which versions of Apache, Python, and other mods it's running (thanks, Apache). I really don't want a subdomain, so that's out. Another chat with tech support, and they recommend a fake domain name, since it won't create anything. Sure enough, using a domain of djangomysite.com works, and WHM allows me to create a django account. But of course, I can't send email to [email protected] (whereas I could to [email protected]). What I want is to be able to create a second account associated with mysite.com (so I can run cPanel logged in as django, send email to [email protected], etc.) without creating a whole new subdomain or fake domain.

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go on Server Fault and what on Super User; if some kindly admin decides this is in the wrong place, please move it for me. Many thanks.

    I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure; I do, however, have the servers switch roles periodically. I have a track_script running on the backup that varies its return value between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0 the priority is raised above that of the master; upon returning 1 the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support, and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google-fu failed me).
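
    One low-tech angle, as an untested sketch: have the rotation script drop a flag file just before it initiates a planned swap, and have keepalived's notify_master hook (which is a real keepalived directive; everything else here is a placeholder) suppress the alert when that flag is present:

        vrrp_instance VI_1 {
            ...
            notify_master "/usr/local/bin/failover-alert.sh"
        }

        #!/bin/sh
        # /usr/local/bin/failover-alert.sh - runs when this node becomes MASTER
        if [ -f /var/run/planned-swap ]; then
            rm -f /var/run/planned-swap   # rotation script left this flag: planned, stay quiet
            exit 0
        fi
        echo "Unplanned failover: $(hostname) just became MASTER" \
            | mail -s "keepalived failover" [email protected]

    The track_script would touch /var/run/planned-swap on the node about to take over, right before flipping its return value.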

    Read the article
