Search Results

Search found 89025 results on 3561 pages for 'spring dm server'.

  • Concurrent users with Quickbooks?

    - by airietis
    I work in a company with 3 people who regularly use the same QuickBooks file, but we work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks remotely at the same time, each making changes simultaneously. We have a spare desktop PC that can be utilized as a server. So, my question is: what is the cheapest and most hassle-free solution to this problem? I've considered cloud application hosting, but it is very expensive ($40 per user per month) and we are on a tight budget. Is it possible to install QuickBooks on my own server and have everyone connect to it remotely? If so, what is the best way to accomplish this? Remote Desktop Protocol? Or is there a built-in feature for this in QuickBooks Premier 2013?

    EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Would a dedicated server that users connect to over a VPN be a viable solution?

  • Open ports broken from internal network

    - by ksvi
    Quick summary: the forwarded port works from the outside world, but from the internal network a connection to the external IP is refused.

    This is a simplified situation to make the explanation easier: I have a computer running a service on port 12345. It has the internal IP 192.168.1.100 and is connected directly to a modem/router with internal IP 192.168.1.1 and external (public, static) IP 1.2.3.4. (The router is a TP-Link TD-W8960N.) I have set up port forwarding (virtual server) so that port 12345 goes to port 12345 at 192.168.1.100.

    If I run telnet 192.168.1.100 12345 from the same computer, everything works, but telnet 1.2.3.4 12345 says connection refused. The same thing happens from another computer on the same internal network. That would suggest the port forwarding is not working. However...

    If I point an online port-checking service at my external IP and the service port, it reports the port as open, and I can see the remote server connecting and immediately closing the connection. From another computer on a mobile internet connection, telnet 1.2.3.4 12345 also gives a working connection. So the port forwarding seems to be working; it is using the external IP from the internal network that doesn't.

    I have no idea what can be causing this, since another setup very much like this one (different router) works for me: there I can access a service running on a server from inside the network through both the internal and the external IP.
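    For what it's worth, this looks like missing NAT loopback (hairpin NAT): the router rewrites the destination but not the source, so the internal server replies directly to the LAN client, which rejects the unexpected packets. If the router were a Linux box (the TD-W8960N may simply not support loopback at all), the equivalent rules would be a sketch like:

        # DNAT as usual, plus SNAT on LAN-to-LAN hairpin traffic so replies
        # return through the router instead of going host-to-host directly
        iptables -t nat -A PREROUTING  -d 1.2.3.4 -p tcp --dport 12345 \
                 -j DNAT --to-destination 192.168.1.100
        iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.100 \
                 -p tcp --dport 12345 -j SNAT --to-source 192.168.1.1

    On a router that can't do this, split-horizon DNS (resolving the name to 192.168.1.100 inside the LAN) is the usual workaround.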

  • Multiple SSL certificates on Apache using multiple public IPs - not working

    - by St. Even
    I need to configure multiple SSL certificates on a single Apache server. I already know that I need multiple external IP addresses, since I cannot use SNI (this server only runs Apache 2.2.3). I assumed that I had everything configured correctly, but unfortunately things are not working as they should (or maybe I should say, as I expected them to work). In my httpd.conf I have:

        NameVirtualHost *:80
        NameVirtualHost *:443

    Let's say my public IP is 12.0.0.1 and my private IP is 192.168.0.1. When I use the public IP in my vhost, my default website is shown instead of the one defined in the vhost, e.g.:

        <VirtualHost 12.0.0.1:443>
            ServerAdmin [email protected]
            ServerName blablabla.site.com
            DocumentRoot /data/sites/blablabla.site.com
            ErrorLog /data/sites/blablabla.site.com-error.log
            #CustomLog /data/sites/blablabla.site.com-access.log common
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/blablabla.site.com.crt
            SSLCertificateKeyFile /etc/httpd/conf/ssl/blablabla.site.com.key
            SSLCertificateChainFile /etc/httpd/conf/ssl/blablabla.site.com.ca-bundle
            <Location />
                SSLRequireSSL On
                SSLVerifyDepth 1
                SSLOptions +StdEnvVars +StrictRequire
            </Location>
        </VirtualHost>

    When I use the private IP in my vhost, everything works as it should (the website defined in my vhost is shown), e.g.:

        <VirtualHost 192.168.0.1:443>
            ...same as above...
        </VirtualHost>

    My server is listening on all interfaces:

        [root@grbictwebp02 httpd]# netstat -tulpn | grep :443
        tcp    0    0 0.0.0.0:443    0.0.0.0:*    LISTEN    5585/httpd

    What am I doing wrong? If I cannot get this to work, I cannot continue to add the second SSL certificate on the other public IP... If more information is required, just let me know!
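    One mismatch worth checking: on Apache 2.2 the NameVirtualHost argument must literally match the <VirtualHost> address, so NameVirtualHost *:443 does not pair with <VirtualHost 12.0.0.1:443>; and if the public IP actually lives on a NAT device rather than on the server, Apache never sees 12.0.0.1 as a local address at all and falls back to the default vhost. A quick diagnostic sketch:

        httpd -S                 # dump the parsed vhost map: which vhost is default for port 443?
        apachectl configtest     # 2.2 warns about NameVirtualHost/VirtualHost mismatches
        ip addr | grep 12.0.0.1  # is the public IP bound on this box, or only on the router?

    If the address is NATed, then <VirtualHost 192.168.0.1:443> (the address Apache really listens on) is the correct form.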

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. top is not helpful for this, since its calculations for RES and VIRT are inconsistent.

    I am doing this by matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up the size in bytes of all the segments for the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up: the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running this as root, so I am sure I see all shared memory segments.

    My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or is this only shared memory (and not "un-shared" memory)? If this approach is wrong, what is the correct way to calculate the total allocated memory for each userid?

    Also: I believe doing a cat on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic?

    Thanks! Mark Teehan, Singapore
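    For reference, the kind of per-userid total being computed - a sketch; check the field positions against the header line of /proc/sysvipc/shm on the running kernel before trusting it:

        # size (bytes) is typically field 4 and uid field 8
        awk -v uid=1001 'NR > 1 && $8 == uid { total += $4 }
             END { printf "%.2f GiB\n", total / 1024 / 1024 / 1024 }' /proc/sysvipc/shm

    On the underlying question: this file only covers System V shared memory. Heap, stacks, mmap()ed regions and POSIX shm live elsewhere (see /proc/<pid>/smaps), which would account for a database engine reporting more than the SysV total.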

  • using nginx with proxy_pass on a subdomain

    - by marcus3006
    I have a Rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just get a plain index.html. Here is my configuration:

        server {
            server_name www.example.com example.com;
            root /home/deploy/static/example;
        }

        upstream redmine {
            server unix:/tmp/redmine.socket fail_timeout=0;
        }

        server {
            # you could put a list of other domain names this application answers
            server_name redmine.example.com;
            root /home/deploy/rails/redmine/public;
            access_log /var/log/nginx/redmine_access.log;
            rewrite_log on;

            location * {
                proxy_pass http://redmine;
            }

            location ~ ^/(assets)/ {
                root /home/deploy/rails/redmine/public;
                gzip_static on; # to serve pre-gzipped version
                expires max;
                add_header Cache-Control public;
            }
        }

    Anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly, but when I try to access redmine.example.com I get "couldn't resolve host".
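    "couldn't resolve host" is produced by the client's resolver before nginx is ever contacted, so the first suspect is a missing DNS record for the subdomain rather than this config. Separately, location * is not standard nginx syntax; a plain prefix block, location / { ... }, is presumably what was meant, otherwise requests may never reach the proxy. A quick check sketch:

        dig +short redmine.example.com   # must print the server's IP; if empty, the A/CNAME record is missing
        nginx -t                         # then sanity-check the config after changing "location *" to "location /"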

  • postfix email gateway

    - by k-h
    I am setting up a Postfix email gateway. It will not hold any mail, but will accept email for my domain and forward it to another internal mail server, and relay outbound mail from the internal server. One of the main complications is that I am working on a live, running system and this will be an upgrade, so I am using a test domain which I will change at some point to the real domain.

    I tried various methods, but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method, the main one being that the entries can't be of the simple form [email protected], because the gateway doesn't know where to send them; they have to be of the form [email protected].

    What I would like doesn't seem hard, but I can't get my head around the Postfix documentation. There seem to be various ways, but none of them work for me, and most of the examples I have found on the web assume the mail is going to end up on the gateway itself. I want a list of users somewhere, preferably of the form user1, user2, etc. rather than [email protected] (I can easily generate this list), and I would like Postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest clues as to how I might do this?
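    This sounds like standard Postfix relay-domain handling, which needs no aliases at all. A sketch (host name realmailserver.example.com as in the question; the recipient list generated from LDAP):

        postconf -e 'relay_domains = example.com'
        postconf -e 'transport_maps = hash:/etc/postfix/transport'
        postconf -e 'relay_recipient_maps = hash:/etc/postfix/relay_recipients'

        # route the whole domain to the internal server
        echo 'example.com   smtp:[realmailserver.example.com]' > /etc/postfix/transport

        # relay_recipients: one "user1@example.com OK" line per valid user,
        # generated from the LDAP list, so the gateway can reject unknowns
        postmap /etc/postfix/transport /etc/postfix/relay_recipients
        postfix reload

    The brackets around the host name suppress MX lookups, so the transport keeps working even before the real domain's DNS is switched over.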

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB of it. Right now what's on here is pretty much a pilot project getting negligible traffic, so I think it's pretty clear that I'll be needing more memory.

    I was wondering how, if I were to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic to use when calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here is free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or whether what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
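    One workable heuristic: steady-state need is roughly base footprint + (average worker RSS x peak concurrent workers), plus headroom for the page cache. A sketch for measuring the per-worker figure (process names are assumptions - apache2 on Debian/Ubuntu, httpd on RHEL):

        ps -o rss= -C apache2 | awk '{ sum += $1; n++ }
            END { if (n) printf "%d workers, avg %.1f MB each\n", n, sum/n/1024 }'

    At, say, 40 MB per worker, each additional 1 GB of RAM buys roughly 25 more concurrent requests. Also, buffers and cached both showing 0 is itself unusual (typical of an OpenVZ-style container, where the host owns the cache) and worth confirming before sizing anything.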

  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file. But therein lies the problem: on start-up, the service prints some logging messages to stderr. SMF seems to see those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output:

        [ Mar 30 14:59:54 Enabled. ]
        [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ]
        Starting server...
        [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ]

    Running the server directly on the command line is fine, and AFAICS there are no errors encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the process actually exits?
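    Judging by "Method or service exit timed out", the stderr output is probably a red herring: the start method runs java server.CustomServer in the foreground, never returns, and SMF kills the contract when the start timeout expires. A sketch of the usual fix - declare the process itself to be the service instead of expecting the start method to daemonize (FMRI name assumed):

        # if the startd property group is missing: svccfg -s customserver addpg startd framework
        svccfg -s customserver setprop startd/duration = astring: child
        svcadm refresh customserver
        svcadm enable customserver

    With the "child" duration model, SMF restarts the service only when the process exits, which is the behaviour being asked for.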

  • Oracle Linux screen freezes during installation

    - by Fearless
    I was installing Oracle Linux 6.4 on a server and the screen suddenly froze. Here were the previous steps: I put in the disk, clicked install, checked the disk (no errors), did the pre-install setup (clock, root password, host and domain name, etc.), configured two 40 GB hard drives in a RAID1 array (no swap; 3100 MB encrypted RAID partitions, a ~100 MB ext4 partition mounted at /boot, and an encrypted ext4 RAID device mounted at /), selected packages, and hit continue.

    The system did its short pre-install processes, then went to the main installation screen with the long status bar. The installer proceeded like always, but around package 250 of ~1000 the screen suddenly went black, with a text cursor in the upper left corner and the mouse cursor frozen in its previous place. Neither cursor moved, and the only thing that triggered a response was a ctrl-alt-delete, which rebooted the machine. I have run this in VMs before without this issue. Memtest hasn't reported anything, and the media check went smoothly. The machine has supported Ubuntu Server without issues before. Any ideas? I have tried booting after that, but the GRUB bootloader tries to find fd0 for some reason (I have no idea why it would search for the floppy disk).

    UPDATE: My server eventually installed successfully, but won't boot up. I think that, for some reason, it is still using the old bootloader from the previous installation. Any ideas on how to fix that?
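    On the boot problem: GRUB probing fd0 usually points at a stale device map, or at the MBR still holding the previous install's boot code. A sketch of what to try from the installer's rescue mode (OL 6.4 ships GRUB 0.97; /dev/sda assumed to be the boot disk):

        chroot /mnt/sysimage
        grub-install --recheck /dev/sda   # --recheck regenerates /boot/grub/device.map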

  • Repeated installation of malicious software to do outbound DDOS attack [duplicate]

    - by user224294
    This question already has an answer here: How do I deal with a compromised server? (12 answers)

    We have an Ubuntu virtual private server hosted by a Canadian company. Our VPS was abused to run an "outbound DDoS attack", as reported by the server security team. There are 4 files in /boot that look like iptables tools; please note the capital letters "I" and "L":

        VPS:/boot# ls -lha
        total 1.8M
        drwx------   2 root root 4.0K Jun  3 09:25 .
        drwxr-xr-x  22 root root 4.0K Jun  3 09:25 ..
        -r----x--x   1 root root 1.1M Jun  3 09:25 .IptabLes
        -r----x--x   1 root root 706K Jun  3 09:23 .IptabLex
        -r----x--x   1 root root   33 Jun  3 09:25 IptabLes
        -r----x--x   1 root root   33 Jun  3 09:23 IptabLex

    We deleted them, but after a few hours they appeared again and the attack resumed. We deleted them again; they resurfaced again, and so on. Finally we had to disable our VPS. Please let us know how we can find the malicious script somewhere in the VPS that keeps automatically reinstalling this attack software. Thanks.
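    The linked answer stands - a host that keeps reinfecting itself can only be trusted again after a rebuild - but for locating the persistence hook of this particular IptabLes/IptabLex malware, a hunt sketch:

        ps aux | grep -i iptable                      # a running dropper re-creates the files
        ls -la /etc/init.d/ /etc/rc*.d/ | grep -i iptable    # init-script persistence
        grep -ri iptable /etc/cron* /var/spool/cron 2>/dev/null
        lsattr /boot/.IptabLes /boot/.IptabLex        # droppers often set the immutable bit

    Deleting the files without killing the parent process and removing its init hook is exactly what produces this respawn cycle.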

  • Confused with creating an ODBC connection, apparently I have two separate odbcad32.exe files?

    - by Hoser
    Alright, this is my first time working with this, so forgive me if I'm a little confusing or vague. I have a server with Windows Server 2008 Standard without Hyper-V (6.0, build 6002). I'm running a small website off this server and using a Microsoft Access database to store some information coming in through the website. I'm sure the PHP I have written to open the ODBC connection is correct, as it worked when I created this website in a testing environment on a laptop.

    My current issue is that I seem to have two different odbcad32.exe files, and one doesn't have a driver for .accdb files, only for .mdb. The first has a driver titled 'Driver do Microsoft Access (.mdb)'; the second has one titled 'Microsoft Access Driver (.mdb, .accdb)'. I reach the first odbcad32.exe at C:\Windows\SysWOW64\odbcad32.exe, and the one that seems to have the driver I need through Control Panel > Administrative Tools > Data Sources (ODBC), where I simply create a new connection in the System DSN tab.

    Whenever I make changes through the Control Panel one, I see no changes, but when I use the odbcad32.exe in SysWOW64 I do get changes in the errors that come back to me. The main difference I noticed: when I set up an ODBC connection with the Control Panel method, it said it simply couldn't find the ODBC connection, but when I made a .mdb connection in the SysWOW64 one (and pointed it at a .accdb file) it says "Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt." So it seems the odbcad32.exe in SysWOW64 is the one being recognized as 'correct'. Is there any way to fix this? I've tried to be as thorough as possible, but if I've been confusing or left anything out, let me know.
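    For anyone else hitting this: on 64-bit Windows there really are two separate ODBC administrators, each with its own DSN list, and a DSN must match the bitness of the calling application - 32-bit PHP can only see DSNs created in the 32-bit tool:

        C:\Windows\System32\odbcad32.exe    64-bit ODBC Administrator (what the Control Panel opens)
        C:\Windows\SysWOW64\odbcad32.exe    32-bit ODBC Administrator

    Since the 32-bit side here only offers the old Jet .mdb driver, installing the 32-bit Microsoft Access Database Engine redistributable (which provides the 'Microsoft Access Driver (*.mdb, *.accdb)') should make the .accdb DSN available where PHP can actually use it.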

  • HSphere - Only sees Apache 2 Test Page after forced shutdown?

    - by Darkwoof
    Hi, I have a dedicated server running on a Dell PowerEdge 850 with CentOS 4.4 and HSphere 3.0 Patch 6, colocated at a datacenter. Last night my hosting company had to schedule a change in the power bar, and I gave the go-ahead for them to shut down the server and bring it up when they were done. Since they do not have admin access to the machine, I suppose they did a forced shutdown.

    When the machine was brought up, I found that all my domains (and sub-domains) now point to an "Apache 2 Test Page" instead of the pre-configured sites that were running prior to the shutdown. This apparently only affects the standard sites running on port 80: my Webmin instance on port 1000 is still accessible, for example, as is my HSphere control panel on port 8080. I've checked the config settings for each site using the HSphere UI and didn't find anything wrong. I've also tried rebooting the server via SSH, which does not rectify the problem. I've previously done reboots with no issues; the sites would just come right back up when it was done, but not this time. I'm guessing some configuration file got corrupted or overwritten this time? Anyone with HSphere experience who can advise on what happened and how to solve it? Thanks.

    (I do not have an active support agreement for HSphere since Parallels took over and increased the minimum license to 200; I only had a 25 license for use by family and friends.) Thanks in advance.
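    The stock "Apache 2 Test Page" is what CentOS serves when no matching vhost loads, so a reasonable first step is checking whether Apache parsed the HSphere vhost includes at all after the unclean shutdown - a diagnostic sketch (paths are guesses, since HSphere keeps its own httpd tree):

        httpd -S                                       # which vhosts did Apache actually parse?
        grep -n 'Include' /etc/httpd/conf/httpd.conf   # are the vhost include lines still there?
        tail -50 /var/log/httpd/error_log              # truncated or unparsable config files show up here

    A zero-length or fsck-truncated vhost file after a forced power-off would produce exactly this symptom.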

  • What causes a switch port to receive data not destined for it?

    - by user1693454
    We are having an intermittent fault which is affecting one of our control systems on one of our HP ProCurve switches. For some reason, this PLC (10 Mbit port, 192.168.6.56), which is attached directly to the HP switch, intermittently starts receiving data which is not destined for it. The data is being sent from a Thecus NAS with the latest firmware (192.168.6.218) to a physical IBM server running Win2003R2 and SAP (192.168.6.225). The problem traffic has not always gone to this server - it has gone to other physical servers in the past too - but it is always from the Thecus NAS.

    I am using a monitor port to wireshark what is going in and out of the PLC. Normally there would be about 1 MB in/out per 2 or 3 minutes - only a server asking the state of the coils. When the problem occurs, there is a flood of data on the PLC line: in this captured instance, about 67 MB in less than a minute. Because of this, there is no way the PLC can be queried, as the port is effectively DoSed, in turn killing part of our factory.

    I know that having production on the same VLAN as IT is not a good idea - I agree - however it cannot be changed at the moment (that will have to wait 3 months), and the problem has only started happening in the last 3 months. I captured one of the packets being sent from the Thecus NAS on the PLC port of the HP switch (screenshot omitted), and there are over 700 of these in that one 1024 kB capture file. If anyone has any idea what could be going on, some help would be greatly appreciated. If you need to know anything more, let me know! Cheers!
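    A quiet port suddenly receiving traffic addressed to another host is the classic signature of unknown-unicast flooding: when the switch's MAC table has no entry for the destination (aged out, table overflow, or flushed by a topology change), frames are flooded out every port in the VLAN, PLC included. A ProCurve diagnostic sketch (exact syntax varies by model and firmware; the MAC below is a hypothetical placeholder for the SAP server's):

        show mac-address                  # how full is the table?
        show mac-address 001a2b-3c4d5e    # is the SAP server's MAC learned, and on which port?
        show logging -r                   # recent events: loops, STP topology changes, port flaps

    Running the middle command during an incident would confirm it: while the flood lasts, the SAP server's entry would be missing from the table.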

  • I can't change mysql port (5.6.12) changing the lines of my.ini (windows 8)

    - by videador
    I was trying to change the port of the MySQL server on my local machine, but I can't. The MySQL version is 5.6.12, installed via WAMP, and I am on Windows 8. I changed these lines in the my.ini file located in C:\wamp\bin\mysql\mysql5.6.12:

        [client]
        #password   = your_password
        port        = 3307
        socket      = /tmp/mysql.sock

        [wampmysqld]
        port        = 3307
        socket      = /tmp/mysql.sock
        key_buffer  = 16M
        max_allowed_packet = 1M

    The previous values were 3306. I then restarted the installed server, but it doesn't work: the MySQL server is still running on 3306. Next, I changed the service's command line to the following, to make sure that my.ini is read by the MySQL instance:

        c:\wamp\bin\mysql\mysql5.6.12\bin\mysqld.exe --defaults-file="C:\wamp\bin\mysql\mysql5.6.12\my.ini" wampmysqld

    But nothing - it still doesn't work. My last resort was to copy the contents of my.ini into my-default.ini (a file that also lives in C:\wamp\bin\mysql\mysql5.6.12\ and whose purpose I don't know). It still doesn't work, and the port is still 3306.
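    Two checks worth making (a sketch): which option file and groups the running mysqld actually read, and whether the group name matches how it was started - a service registered as wampmysqld reads the [wampmysqld] group, but mysqld launched plainly reads [mysqld], and this my.ini sets no port under [mysqld]:

        mysqld --print-defaults                            # lists the option files and groups picked up
        mysql -u root -p -e "SHOW VARIABLES LIKE 'port';"  # what the running server actually uses

    Adding port = 3307 under a [mysqld] section as well, then restarting the wampmysqld Windows service itself, covers both start-up paths.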

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:

    1. Write a shell script, run via sudo, that tails the last 512 KB of a log into a separate file readable by the application - ineffective, because it forks a new process and the data has to be read twice.
    2. Add www-data to the adm group (which can read logs) - insecure.
    3. Start a PHP process via cron every minute to read the logs - not great, because it doesn't allow real-time monitoring. Also, this script would run even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    4. Create hardlinks to all the log files with lowered permissions - I guess that won't work, because logrotate may recreate the log files, and their inode numbers will change.
    5. Run a separate nginx/Apache server under a privileged user that may read the logs.

    Maybe someone has a better solution?
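    A finer-grained variant of idea 2 in the list above: POSIX ACLs on just the files the application needs, reapplied on rotation since logrotate recreates files without the ACL. A sketch with Debian default paths:

        setfacl -m u:www-data:r /var/log/syslog /var/log/apache2/error.log

        # in the matching logrotate stanza, keep the ACL across rotations:
        #   postrotate
        #       setfacl -m u:www-data:r /var/log/syslog
        #   endscript

    This grants read access on two named files rather than everything the adm group can see.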

  • Setting up DNS using BIND

    - by dupdupdup
    I am having trouble setting up my db files. Please kindly point me in the right direction! I need to define a nameserver that manages the domain example.org.au, and I need it to have two records: one called server, which is the IP address of the current machine, and one called www, where www.example.org.au points to another IP address. I can't seem to get my system to work. This is my db.example.org.au file:

        example.org.au. IN SOA server.example.org.au. (
                        1; 3; 1h; 1w; 1h )
        ;
        ; Host addresses
        localhost.example.org.au IN A 127.0.0.1
        www.example.org.au.      IN A 192.168.1.200 ; another virtual machine
        server.example.org.au    IN A 192.168.1.199 ; current virtual machine

    If possible, please correct my errors! Thanks! Any good guides out there? Thanks in advance! :)
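    A corrected sketch of the zone. The visible problems: the SOA lacks the contact-mailbox field, there is no NS record and no $TTL, and names without a trailing dot (localhost.example.org.au, server.example.org.au) get the origin appended, becoming e.g. localhost.example.org.au.example.org.au. The admin mailbox name below is a placeholder:

        $TTL 1h
        example.org.au.           IN SOA server.example.org.au. admin.example.org.au. (
                                      2     ; serial
                                      3h    ; refresh
                                      1h    ; retry
                                      1w    ; expire
                                      1h )  ; negative-cache TTL
        example.org.au.           IN NS  server.example.org.au.
        server.example.org.au.    IN A   192.168.1.199   ; current virtual machine
        www.example.org.au.       IN A   192.168.1.200   ; another virtual machine
        localhost.example.org.au. IN A   127.0.0.1

    Running named-checkzone example.org.au db.example.org.au will flag most of these before a reload does.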

  • Ubuntu 12.04 can't boot after installing with software RAID 1

    - by Bill
    I've been trying to install Ubuntu with software RAID on my server, and there is obviously something I don't understand about the process. This is the guide I followed: https://help.ubuntu.com/11.04/serverguide/advanced-installation.html

    I have two identical 1 TB disks in my server. I went through the initial install process and manually set up my partitions. On each disk I set up:

    (1) a 100 MB partition for EFI boot (I didn't originally have this, but added it based on a forum post I found after my original install failed to boot; I ended up with EFIboot, since that is what the 'guided partitioning' decided to do)
    (1) a 970 GB partition for /
    (1) a 30 GB partition for swap

    I then created new RAID1 devices combining the two partitions, one from each disk, so that each partition is mirrored, and configured their usage as stated above. After saving the configuration I said yes to booting in a degraded state. The rest of the setup went normally, with no errors of any kind. I saw GRUB being installed, again with no errors. However, after rebooting the server I get the dreaded 'Insert boot media' and nothing happens.

    I loaded up the recovery disk, and the mdadm configuration looks correct: md0 is my EFI boot partition, md1 is my / partition using ext4, md2 is my swap partition. Running file -s /dev/md0 doesn't indicate that GRUB is there, so I attempted to reinstall GRUB using the recovery disk. I selected the md0 disk and it appeared to install just fine. Running file -s /dev/md1 shows the error "needs journal recovery"; I'm not sure if that's related or how to fix it. Rebooting gives me the same problem: no boot media found. I've searched around the internet but can't figure out what to do next, or more importantly, how to troubleshoot what exactly is going wrong. Thanks!
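    Two notes that may explain this (hedged, since the exact firmware setup isn't known): an EFI system partition generally cannot live on an md RAID device, because the firmware reads it as plain FAT (only metadata 1.0, with the superblock at the end, or two independent ESPs work); and "needs journal recovery" from file -s is normal for an ext4 volume that wasn't cleanly unmounted - it clears on the next mount. A rescue sketch for a BIOS/CSM boot path (device names assumed):

        mdadm --assemble --scan
        mount /dev/md1 /mnt
        for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
        chroot /mnt
        grub-install /dev/sda && grub-install /dev/sdb   # MBR of both disks, so either can boot
        update-grub

    Installing GRUB onto md0 itself (as the recovery disk offered) puts it somewhere the firmware never looks.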

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have got a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own, generally dynamic, IP. Until now, I have been doing this the way of the script kiddie:

    - Each host machine runs an (s)FTP server or an HTTP(S) server, and correspondingly has a certain port opened up by its gateway.
    - Each host machine runs a program that watches a certain folder and automatically opens, prints, or exec()s when a new file of a given extension shows up.
    - Dynamic IP addresses are accommodated using a dynamic DNS service.
    - Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as needed.

    This approach has been surprisingly reliable; however, obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive, and failures need to be detected within minutes of submission by end users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine - there is only one device sending a message to another device.

    So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays.

    So I'm looking for suggestions on how to model the protocol itself, at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that - please and thanks.

  • Why am I getting a Sharepoint error on a simple "hello world" web page?

    - by Fetchez la vache
    I've been granted admin access to an internal IIS server on which I need to set up a web site. Before doing anything technical I wanted to ensure that I could access the server, but when attempting to access a simple page (one that does not refer to SharePoint) at http://localhost/index.html while logged onto the server directly, I am getting:

        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.

        Parser Error Message: Could not load file or assembly 'Microsoft.SharePoint' or one
        of its dependencies. The system cannot find the file specified.

        Source Error:
        Line 1: <%@ Assembly Name="Microsoft.SharePoint"%><%@ Application Language="C#"
                Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>

        Source File: /global.asax    Line: 1

        Assembly Load Trace: The following information can be helpful to determine why the
        assembly 'Microsoft.SharePoint' could not be loaded.
        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging,
        set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1.
        Note: There is some performance penalty associated with assembly bind failure logging.
        To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Version Information: Microsoft .NET Framework Version: 2.0.50727.5456;
        ASP.NET Version: 2.0.50727.5456

    To be quite honest, I know zip about SharePoint, so why am I getting a SharePoint error on a basic "hello world" HTML page? Cheers :)

    Update: I've since supposedly uninstalled SharePoint, but am still getting this error. Any ideas welcome!
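    The error means the site's document root still contains SharePoint's global.asax, whose first line loads Microsoft.SharePoint; ASP.NET compiles global.asax at application start, so even a plain .html request can trip over it once the assembly is gone. A cleanup sketch (run in the site's root; keep copies rather than deleting outright):

        :: SharePoint leftovers in the web root - rename, test, then remove
        ren global.asax global.asax.bak
        ren web.config web.config.bak
        iisreset

    If web.config also references SharePoint modules, renaming it matters as much as global.asax; a cleaner long-term fix is pointing a fresh IIS site or application at an empty directory.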

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it, running a pretty heinous process in IIS. Under load the process was maxing out CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances.

    I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor config was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion, because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server, which had four cores presented to it.
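    The engineer's claim is at least plausible: under the (near-)strict co-scheduling used by older ESX releases, a 4-vCPU VM cannot run until enough physical cores are free at once, so on a busy 4-core host the extra vCPUs mostly add ready time rather than throughput. The effect is measurable with esxtop on the host - a diagnostic sketch:

        esxtop      # press 'c' for the CPU view; %RDY is time a vCPU spent runnable but
                    # unscheduled - sustained %RDY much above ~10% per vCPU means contention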

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is a stored procedure on the server side which the client can call, since the client has no Oracle tools to perform the backup. I've searched through the available solutions and found that a stored procedure is the best way, and that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure. I would like to ask about two issues with that API.

    The first: the DETACH function for releasing the handle. Is it necessary to call it at the end of the procedure, and what happens if I don't? I read the Oracle documentation, but I didn't get their point; they say it doesn't terminate the job but indicates that the user is no longer interested in it. Yet when I use DETACH at the end of my procedure, the exported .dmp file disappears.

    The second: to perform a user-triggered (client-side) backup where the modifications are only to the data, I used TABLE mode for the export operation. But the VERSION parameter - what should it be? I also read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)? Thanks
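    For reference, the call sequence Oracle's examples generally show - a sketch with placeholder object names, not a tested procedure - is WAIT_FOR_JOB before DETACH, so the job reaches COMPLETED while a session is still attached; detaching from a job that is still defining or executing is what tends to get it, and its partial dump file, cleaned up. On VERSION: COMPATIBLE (the default) exports according to the database's compatibility setting and is the usual choice; an explicit number like '10.2' only matters when the dump must be imported into an older release.

        DECLARE
          h  NUMBER;
          st VARCHAR2(30);
        BEGIN
          h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                                  job_mode  => 'TABLE',
                                  version   => 'COMPATIBLE');
          DBMS_DATAPUMP.ADD_FILE(h, 'backup.dmp', 'DATA_PUMP_DIR');        -- placeholder file/dir
          DBMS_DATAPUMP.METADATA_FILTER(h, 'NAME_EXPR',
                                        'IN (''ORDERS'', ''CUSTOMERS'')'); -- hypothetical tables
          DBMS_DATAPUMP.START_JOB(h);
          DBMS_DATAPUMP.WAIT_FOR_JOB(h, st);  -- block until the job finishes...
          DBMS_DATAPUMP.DETACH(h);            -- ...then releasing the handle is safe
        END;
        /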

  • nginx reverse proxy slows down my throughput by half

    - by Isaac A Mosquera
    I'm currently using nginx to proxy back to gunicorn with 8 workers, on an Amazon extra-large instance with 4 virtual cores. When I connect to gunicorn directly, I get about 10K requests/sec. When I serve a static file from nginx, I get about 25 requests/sec. But when I place gunicorn behind nginx on the same physical server, I get about 5K requests/sec. I understand there will be some latency from nginx, but I think there might be a problem, since it's a 50% drop. Anybody heard of something similar? Any help would be great! Here is the relevant nginx conf:

        worker_processes 4;
        worker_rlimit_nofile 30000;

        events {
            worker_connections 5120;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
        }

    sites-enabled/default:

        upstream backend {
            server 127.0.0.1:8000;
        }

        server {
            server_name api.domain.com;
            location / {
                proxy_pass http://backend;
                proxy_buffering off;
            }
        }
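    One easy win to test: as configured, nginx opens a new TCP connection to gunicorn for every request, which is significant at these request rates. Upstream keepalive (available in nginx 1.1.4+) removes the per-request handshake - a sketch:

        upstream backend {
            server 127.0.0.1:8000;
            keepalive 64;                  # pool of idle connections to gunicorn
        }

        server {
            server_name api.domain.com;
            location / {
                proxy_pass http://backend;
                proxy_http_version 1.1;    # upstream keepalive requires HTTP/1.1
                proxy_set_header Connection "";
                proxy_buffering off;
            }
        }

    Checking listen-queue overflows on the loopback (netstat -s | grep -i listen) would also show whether the gunicorn backlog is the other half of the drop.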

  • Apache 2 settings for high traffic website

    - by Harry
    I'm having problems with the load on my website. It's an Amazon EC2 server with 15 GB RAM and 4 CPUs behind an LB. apachetop says I'm getting around 80 reqs per second, which seems really low for this kind of server, and the load (given by top) is usually around 15 but does increase to about 150 over 24 hrs. I'm seeing about 100 active Apache processes at any time. Apache is in prefork mode. MySQL is used very little on the server, and there are almost no static files. Here are my Apache settings:

        Timeout 20
        KeepAlive Off
        MaxKeepAliveRequests 0
        KeepAliveTimeout 3

        <IfModule mpm_prefork_module>
            StartServers          40
            MinSpareServers       25
            MaxSpareServers       40
            ServerLimit          400
            MaxClients           400
            MaxRequestsPerChild    4
        </IfModule>

    Can anyone advise on how to tweak the settings? Thanx!

    Edit: The config was arrived at by trial and error. Any change to these lines, and I mean any, makes the load skyrocket in like 5 minutes; it literally jumps to 200-300 in a matter of minutes, especially with MaxRequestsPerChild. I've tried it at 10, 15, 100, and 1000, and the load just skyrockets. About PHP: there are actually only a few PHP files, which aren't really that expensive at all; they just spit some simple stuff out. If I turn on KeepAlive, load also goes to space.
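    For context on why MaxRequestsPerChild 4 stands out: at 80 req/s it forces roughly 20 fork-and-reinitialize cycles per second, each paying PHP/module startup costs, which alone can produce this load profile. A more conventional starting point to test under controlled load (ab or siege against a staging box, rather than in production) would be a sketch like:

        <IfModule mpm_prefork_module>
            StartServers          40
            MinSpareServers       25
            MaxSpareServers       40
            ServerLimit          400
            MaxClients           400
            MaxRequestsPerChild 1000   # or 0 (never recycle) if nothing leaks memory
        </IfModule>

    That raising the value makes production load worse suggests the children accumulate something per request (memory, connections), which is worth profiling directly.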

  • Very high CPU and low RAM usage - is it possible to shift some of the CPU load to RAM (with CloudLinux LVE Manager installed)?

    - by Chriswede
    I had to install CloudLinux so that I could somewhat control the CPU usage and, more importantly, the concurrent connections the websites use. But as you can see, the server load is way too high, and that's why some sites take up to 10 sec to load!

        Server load   22.46 (8 CPUs)                        (!)
        Memory used   36.32% (2,959,188 of 8,146,632)       (ok)
        Swap used     0.01% (132 of 2,104,504)              (ok)

    Server: 8 x Intel(R) Xeon(R) CPU E31230 @ 3.20GHz. Memory: 8143680k/9437184k available (2621k kernel code, 234872k reserved, 1403k data, 244k init). Linux. Yesterday: total of 214,514 page-views (Awstats).

    Now my question: can I shift some of the CPU load to RAM? Or what else could I do to make the sites run faster (the websites are dynamic, so SQL-heavy)? Thanks

        top - 06:10:14 up 29 days, 20:37,  1 user,  load average: 11.16, 13.19, 12.81
        Tasks: 526 total,   1 running, 524 sleeping,   0 stopped,   1 zombie
        Cpu(s): 42.9%us, 21.4%sy,  0.0%ni, 33.7%id,  1.9%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   8146632k total,  7427632k used,   719000k free,   131020k buffers
        Swap:  2104504k total,      132k used,  2104372k free,  4506644k cached

           PID USER     PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+ COMMAND
        318421 mysql    15   0 1315m 754m 4964 S 474.9  9.5  95300:17 mysqld
          6928 root     10  -5     0    0    0 S   2.0  0.0  90:42.85 kondemand/3
        476047 headus   17   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476055 headus   18   0  172m  18m 9.9m S   1.7  0.2   0:00.05 php
        476056 headus   15   0  172m  19m  10m S   1.7  0.2   0:00.05 php
        476061 headus   18   0  172m  19m  10m S   1.7  0.2   0:00.05 php
          6930 root     10  -5     0    0    0 S   1.3  0.0 161:48.12 kondemand/5
          6931 root     10  -5     0    0    0 S   1.3  0.0 193:11.74 kondemand/6
        476049 headus   17   0  172m  19m  10m S   1.3  0.2   0:00.04 php
        476050 headus   15   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
        476057 headus   17   0  172m  18m 9.9m S   1.3  0.2   0:00.04 php
          6926 root     10  -5     0    0    0 S   1.0  0.0  90:13.88 kondemand/1
          6932 root     10  -5     0    0    0 S   1.0  0.0 247:47.50 kondemand/7
        476064 worldof  18   0  172m  19m  10m S   1.0  0.2   0:00.03 php
          6927 root     10  -5     0    0    0 S   0.7  0.0  93:52.80 kondemand/2
          6929 root     10  -5     0    0    0 S   0.3  0.0 161:54.38 kondemand/4
          8459 root     15   0  103m 5576 1268 S   0.3  0.1  54:45.39 lvest
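    To the question as asked: no - load average here reflects CPU and run-queue pressure, not memory, and with swap idle there is nothing to move into RAM. The process table points at mysqld (~475% CPU) as the thing to attack first; a first-pass sketch:

        mysql -u root -p -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"
        # after some traffic, summarise the worst queries (log path varies by distro/config):
        mysqldumpslow -s t /var/lib/mysql/$(hostname)-slow.log | head

    Indexing or caching whatever tops that list will typically do far more for page times than any CPU/RAM rebalancing could.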

  • thought about shared storage (NFS, Lustre) [closed]

    - by user134880
    Possible Duplicate: Can you help me with my capacity planning?

    Right now I have a small cluster with a total of 8 nodes: 6 of them are computing nodes (Apache and VMware) and 2 are for storage. The 2 storage nodes are identical: each is a Linux box with 8 x 1 TB WD RE4 drives in soft RAID 10. The first box is the master and the second is the slave, with data mirrored via DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Everything is currently working well and stably, but it will soon be time to upgrade the system. I have been thinking of Lustre. Does someone have real experience with Lustre or NFS on medium clusters? Would it be a good idea just to upgrade the servers and change the HDDs to 3 TB? With NFS we would always have only 2 servers to maintain (one primary and one slave). Thanks.

    QUESTIONS:

    1) Has someone used Lustre? In production? I have seen a lot of info about how hard Lustre is to set up, because you need to compile your own kernel and patches, but those are answers from newbies. Is there someone who has used Lustre for some period of time?

    2) About the disk upgrade: it's only a question of strategy. I'm not asking whether 3 TB is enough; I'm asking whether it is right simply to replace the HDDs instead of adding new servers (as with Lustre). Thanks again.
