Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 162/319 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • Obtain newer version of NetSNMP for CentOS 5

    - by jtnire
    I'm using CentOS 5 and need to use net-snmp version "net-snmp-utils-5.5-37.el6_2.1.x86_64", which is currently available in CentOS 6 but not in CentOS 5. The reason I need this version (or greater) is that it adds a newly supported option to the config files that I need for my setup. I would very much appreciate it if someone could give me some steps to install this version (or greater) on my production CentOS 5 systems. Upgrading to CentOS 6 is currently not an option. Any help would be appreciated. Thanks

    Read the article

  • SQLAlchemy session management in long-running process

    - by codeape
    Scenario: A .NET-based application server (Wonderware IAS/System Platform) hosts automation objects that communicate with various equipment on the factory floor. CPython is hosted inside this application server (using Python for .NET). The automation objects have built-in scripting functionality (using a custom, .NET-based language), and these scripts call Python functions. The Python functions are part of a system that tracks Work-In-Progress on the factory floor. The purpose of the system is to track the produced widgets along the process, ensure that the widgets go through the process in the correct order, and check that certain conditions are met along the way. The widget production history and widget state are stored in a relational database; this is where SQLAlchemy plays its part. For example, when a widget passes a scanner, the automation software triggers the following script (written in the application server's custom scripting language):

        ' widget_id and scanner_id provided by automation object
        ' ExecFunction() takes care of calling a CPython function
        retval = ExecFunction("WidgetScanned", widget_id, scanner_id);
        ' if the python function raises an Exception, ErrorOccured will be true
        ' in this case, any errors should cause the production line to stop.
        if (retval.ErrorOccured) then
            ProductionLine.Running = False;
            InformationBoard.DisplayText = "ERROR: " + retval.Exception.Message;
            InformationBoard.SoundAlarm = True
        end if;

    The script calls the WidgetScanned Python function:

        # pywip/functions.py
        from pywip.database import session
        from pywip.model import Widget, WidgetHistoryItem
        from pywip import validation, StatusMessage
        from datetime import datetime

        def WidgetScanned(widget_id, scanner_id):
            widget = session.query(Widget).get(widget_id)
            # raises an exception on error
            validation.validate_widget_passed_scanner(widget, scanner_id)
            widget.history.append(WidgetHistoryItem(timestamp=datetime.now(), action=u"SCANNED", scanner_id=scanner_id))
            widget.last_scanner = scanner_id
            widget.last_update = datetime.now()
            return StatusMessage("OK")

        # ... there are a dozen similar functions

    My question is: how do I best manage SQLAlchemy sessions in this scenario? The application server is a long-running, single-threaded process, typically running for months between restarts. Currently, I apply a decorator to the functions I make available to the application server:

        # pywip/iasfunctions.py
        from pywip import functions
        from pywip.database import session

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except:
                    session.rollback()
                    raise
            return _ias_session_handling

        # ... actually I populate this module with decorated versions of all
        # the functions in pywip.functions dynamically
        WidgetScanned = ias_session_handling(functions.WidgetScanned)

    Question: is the decorator above suitable for handling sessions in a long-running process? Should I call session.remove()? The SQLAlchemy session object is a scoped session:

        # pywip/database.py
        from sqlalchemy.orm import scoped_session, sessionmaker
        session = scoped_session(sessionmaker())

    I want to keep the session management out of the basic functions, for two reasons: (1) there is another family of functions, sequence functions; a sequence function calls several of the basic functions, and one sequence function should equal one database transaction; (2) I need to be able to use the library from other environments: a) from a TurboGears web application, where session management is done by TurboGears, and b) from an IPython shell, where commit/rollback will be explicit. (I am truly sorry for the long question, but I felt I needed to explain the scenario. Perhaps not necessary?)
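    To make the session.remove() part of the question concrete, here is a sketch (an assumption of what I mean, not my production code) of the same decorator with session.remove() added so the scoped session is discarded after each top-level call:

        # Hypothetical variant of ias_session_handling (illustration only):
        # commit or roll back as before, then discard the scoped session so the
        # next call starts with a fresh Session object.
        from pywip.database import session

        def ias_session_handling(func):
            def _ias_session_handling(*args, **kwargs):
                try:
                    retval = func(*args, **kwargs)
                    session.commit()
                    return retval
                except Exception:
                    session.rollback()
                    raise
                finally:
                    session.remove()  # releases the (thread-local) session
            return _ias_session_handling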

    Read the article

  • Fedora vs Ubuntu to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora. I have been able to get Subversion set up and hosted via Apache over https and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla: I have run into some issues getting all the perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora? Can I just install a package instead of having to download the tar.gz file, untar it, run perl scripts, etc.? 2) Is Fedora considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and act as a file and print server for my home network.

    Read the article

  • Percona MySQL 5.5 fails to start

    - by keymone
    I'm trying to set up a new server here, but I keep getting this in the error log:

        mysqld_safe Starting mysqld daemon with databases from /data/mysql/myisam
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Note] Flashcache bypass: disabled
        [Note] Flashcache setup error is : setmntent failed
        /usr/sbin/mysqld: File '/var/mysql/bin/bin-log.index' not found (Errcode: 13)
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete
        mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    Everything under /data/mysql (its ibdata and myisam folders) is owned by mysql:mysql and has proper permissions. The same goes for the folders with bin and relay logs under /var/mysql. AppArmor is purged from the server. Any ideas?

    PS: It seems like something other than AppArmor is affecting permissions on the MySQL files. After I changed the data directory to a more default one - /var/lib/mysql - the "Can't create test file" error is gone, but "'/var/mysql/bin/bin-log.index' not found (Errcode: 13)" is still there.

    PPS: So I installed AppArmor back and added all the folders to mysqld's profile, and the errors mentioned above are now gone (or mysql doesn't even get to that point now). What I have now is this:

        /usr/sbin/mysqld: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Banging my head against the wall.

    Read the article

  • MySQL Database Replication and Server Load

    - by Willy
    Hi everyone, I have an online service with around 5000 MySQL databases. I am now interested in building a development area with exactly the same environment in my office, so I am about to set up MySQL replication between my live MySQL server and a development MySQL server. My concern is the load that will be placed on my live MySQL server once replication is started. Do you have any experience with this? Will this process cause extra load on my production server? Thanks, have a nice weekend.

    Read the article

  • Duplicate incoming TCP traffic on Debian Squeeze

    - by Erwan Queffélec
    I have to test a homebrew server that accepts a lot of incoming TCP traffic on a single port. The protocol is homebrew as well. For testing purposes, I'd like to send this traffic both:
    - to the production server (say, listening on port 12345)
    - to the test server (say, listening on port 23456)
    My client apps are "dumb": they never read data back, and the server never replies anyway; my server only accepts connections, does statistical computations, and stores/forwards/services both raw and computed data. Actually, the client apps and hardware are so simple that there is no way I can tell the clients to send their stream to both servers, and using "fake" clients is not good enough. What would be the simplest solution? I can of course write an intermediary app that just copies the incoming data and sends it on to the testing server, pretending to be the client. I have a single server running Squeeze and have total control over it. Thanks in advance for your replies.
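    To illustrate the intermediary approach mentioned above, here is a minimal sketch of a Python "tee" proxy that copies every chunk it receives from a client to both servers. The addresses and ports are assumptions (the proxy would have to sit where the clients currently connect, with the production listener moved to another port), and it has not been tested against the real protocol:

        # tee_proxy.py - minimal sketch of a TCP tee: every byte received from a
        # client is forwarded to both upstream servers. Hosts/ports are assumptions.
        import socket
        import threading

        LISTEN_ADDR = ("0.0.0.0", 12345)        # where the real clients connect
        UPSTREAMS = [("127.0.0.1", 22345),      # production server (moved to this port)
                     ("127.0.0.1", 23456)]      # test server

        def handle_client(client_sock):
            # open one upstream connection per destination for this client
            upstream_socks = [socket.create_connection(addr) for addr in UPSTREAMS]
            try:
                while True:
                    chunk = client_sock.recv(4096)
                    if not chunk:
                        break
                    # clients never read back, so a one-way copy is enough
                    for s in upstream_socks:
                        s.sendall(chunk)
            finally:
                client_sock.close()
                for s in upstream_socks:
                    s.close()

        def main():
            server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(LISTEN_ADDR)
            server.listen(128)
            while True:
                conn, _ = server.accept()
                threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

        if __name__ == "__main__":
            main()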

    Read the article

  • AWS RDS MySQL remote connection extremely slow

    - by nute
    I have a site hosted on AWS EC2 (Elastic Beanstalk), with a MySQL database hosted on AWS RDS. Everything works fine on the production server, fast and all. However, when I try to connect remotely from my local machine, it sometimes gets extremely slow (like 4 minutes to load the list of tables), or it simply times out. I added my IP to the security group, which I did correctly, since it sometimes works. When it doesn't work, I check the production server at the same time and it still looks fine.
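    One quick way to separate raw network latency from MySQL-side slowness is to time the TCP handshake to the RDS endpoint from the local machine. A small sketch (the endpoint name below is a placeholder):

        # rds_connect_time.py - time the TCP handshake to the RDS endpoint a few times.
        # The hostname is a placeholder; substitute the real RDS endpoint.
        import socket
        import time

        HOST = "mydb.xxxxxxxx.us-east-1.rds.amazonaws.com"
        PORT = 3306

        for attempt in range(5):
            start = time.time()
            try:
                sock = socket.create_connection((HOST, PORT), timeout=30)
                print("attempt %d: connected in %.2f s" % (attempt + 1, time.time() - start))
                sock.close()
            except (socket.timeout, OSError) as exc:
                print("attempt %d: failed after %.2f s (%s)" % (attempt + 1, time.time() - start, exc))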

    Read the article

  • Oracle EE 11.2g: how to generate fresh new redo logs

    - by Aikanaro
    Hi, in the company I work for, we are heavy users of VMware machines. Almost all our projects are developed inside a virtual environment up to the point where we have to deploy them to a production system. While in development, some colleagues of mine deleted the Oracle redo log files in the hope of gaining some free space. Now they are unable to start the database instance. Is there a way of generating fresh new redo logs so that the instance can be started? This is urgent, and even though I'm currently googling for an answer I have yet to find it. Thanks in advance.

    Read the article

  • zabbix 2.2.1 no graphs in Web scenario

    - by Mick
    Hello, for some time I have had a problem with graphs in web scenarios on Zabbix 2.2.1 (screenshot below); the problem appears on every web-scenario graph. I installed the same scenario on a second Zabbix instance that runs on my local virtual machine. On my local machine all Zabbix components (server, frontend, agents) run together, but on my production Zabbix only the frontend is separated from the server. Scenario for OpenERP:

        ==============================
        Name: OpenERP Web Checks
        Application:
        New application:
        Authentication:
        Update interval (in sec): 60
        Retries: 1
        Agent: Internet Explorer 10.0
        Steps:
        ==============================
        Name: OpenERP login page
        URL: http://openerp.test.com
        Post:
        Variables:
        Timeout: 15
        Required string:
        Required status codes: 200

    My Zabbix server performance: Does anybody have an idea how to fix it? Regards, Mick

    Read the article

  • Migrating data from SQL Server 2000 to SQL Server 2005

    - by Muhammad Kashif Nadeem
    I have to migrate existing data from SQL Server 2000 to SQL Server 2005. The schemas of the two databases are different; for example, the Locations table in SQL Server 2000 is split into two tables in 2005 and has different columns. This is a one-time activity; after a successful migration I don't need the old db anymore. What is the best way to transfer data from one SQL Server to another when the schemas differ? I can write stored procedures to fetch data from SQL Server 2000 and insert/update the tables in SQL Server 2005. What about SSIS? I don't have any experience with it, and is it worth creating an SSIS package when I don't need it again and would have to learn it first? Thanks.

    Read the article

  • How to test TempDB performance?

    - by Matt Penner
    I'm getting some conflicting advice on how best to configure our SQL storage with our current SAN. I would like to do some of my own performance testing with a few different configurations. I looked at using SQLIOSim, but it doesn't seem to simulate TempDB. Can anyone recommend a way to test data, log and TempDB performance? What about using a SQL Profiler trace file from our production system? How would I use this to run against my test server? Thanks, Matt

    Read the article

  • Lots of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT in HAProxy

    - by Tux
    We are running HAProxy in production for around 10k+ concurrent users, but we are seeing a lot of FIN_WAIT2, CLOSE_WAIT, LAST_ACK and TIME_WAIT sockets in the netstat output. This output is from an 8 GB Ubuntu 12.04 node:

        8046 CLOSE_WAIT
        1 CLOSING
        1 established)
        40869 ESTABLISHED
        1212 FIN_WAIT1
        7575 FIN_WAIT2
        1 Foreign
        2252 LAST_ACK
        7 LISTEN
        143 SYN_RECV
        4920 TIME_WAIT

    Can someone please tell me what tweaking I need to do? Please note that all these connections are persistent connections.

        tcp_fin_timeout = 30
        tcp_keepalive_time = 1800

    Right now the application is working fine, but I am wondering whether there will be any issues as we add more users to this HAProxy node.

    Read the article

  • How can I see logs on a server after a kernel panic hang?

    - by Low Kian Seong
    I am running a production Gentoo Linux machine, and recently the server hung at my co-located premises. When I got there I noticed that the server appeared to be hung on a kernel panic. I rebooted the machine with a hard reboot and was disappointed to find that I could not find a shred of evidence anywhere of why the machine hung. Is it true that when I do a hard reboot the messages themselves get lost, or is there a setting somewhere, say in syslog-ng or maybe in sysctl, to at least preserve the error log so that I can prevent such mishaps from happening in the future? I am running a 2.6.x kernel, by the way. Thanks in advance.

    Read the article

  • Performance issue when configuring non HA VM in cluster

    - by laiys
    Hi, I saw this article: http://technet.microsoft.com/en-us/library/cc764243.aspx

    Quote taken from the link: "Important: It is recommended that you not deploy virtual machines that are not highly available on your host clusters. Although you can do this by using Hyper-V (VMM does not allow it), the non-highly available virtual machines will consume resources that otherwise would be available to the HAVMs."

    What kind of resources (CPU, memory, NIC, etc.) will a non-HA VM consume? Just curious, since not every VM in production needs to be in a failover cluster or use Live Migration. If I put the VM onto a CSV but do not make it HA, what impact does that have, since I allocate the same vCPU, vNIC and memory to the VM (not to mention that I lose the failover feature)? Curious to understand more about this. Please advise. Thanks

    Read the article

  • IIS 7.5 Request Filtering logs versus UrlScan 3.1

    - by Mouffette
    When IIS 7.5 Request Filtering blocks a request, it seems to add an entry to the regular IIS web logs with a 404. a) Is there any way to send the detailed Request Filtering logs to a separate file? UrlScan could specify LoggingDirectory and keep this "noise" out of our real IIS logs. b) Also, is there a way to get more information about why Request Filtering blocked a request? UrlScan logged the rule that caused the denial and allowed control over redirection using RejectResponseUrl, which was especially convenient on non-production sites. c) If these features are important, is the recommended practice still to install UrlScan 3.1 on IIS 7.5 (Windows 2008 R2) and disable Request Filtering? Any guidance is appreciated.

    Read the article

  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed. On win 7

    - by NimChimpsky
    I have had to reinstall MySQL; however, the service fails to start, with the above cause listed in Event Viewer. One solution is apparently to delete a couple of files prefixed with 'ib_logfile' that represent any old databases. However, I do not have these files, and my service still fails to start. When I say I don't have these files, I mean I did a search using Windows search with zero results, and they are definitely not present in my MySQL install directory. I also don't have the 'Documents and Settings\Application Data' folder referenced in the link. In fact I have only one MySQL install directory, and I know where that is - what do I need to delete or change? The instance is configured OK (I ran that as administrator and it is listed in services), but the service itself fails to start. Any tips, other than going over to PostgreSQL?

    Read the article

  • Installing Windows 7 destroyed my dual boot setup

    - by ped
    I have a laptop with two drives, each with a separate Windows XP install: one barebones for music production, the other a "normal" Windows XP with Office etc. (unfortunately the BIOS won't give a boot disk choice). Normally I would be presented with two Windows XPs on booting. Selecting the second one would get me into the "normal" installation on disk 1 (C:). Selecting the first in the boot order would give me D: (disk 2) with the barebones XP. However, I installed Windows 7 Home onto disk 1 (C:) and there were no dual boot options anymore. I then installed DualBoot Pro and added the Windows XP disk (D:); the options now show up, but selecting Windows XP just turns into a reboot back to where I started.

    Read the article

  • Alternative, more efficient scraping method for a noncoder, than Google doc's importxml and xpath?

    - by binarybunny
    I've searched throughout the net for a simple solution, but it seems everyone has their own unique method (and coding language) for achieving this. I'm only just beginning to learn Linux, and my coding skills are thoroughly lacking (non-existent). I love the simplicity of using importxml and XPath, but copying and pasting values after reaching the spreadsheet limit of 50 is getting old. Now that I've seen the light, I would really just like to know of a simple yet scalable solution to get more data into more spreadsheets/databases. Before I really start getting my hands dirty, I would love to know some of the ways you guys go about accomplishing this.
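    For context, here is a rough sketch of what the same XPath-style scrape might look like outside of Google Docs, using Python with the requests and lxml libraries. The URL and XPath expression are placeholders, just an assumption of the kind of query importxml was doing:

        # scrape.py - rough sketch: fetch a page, pull values with an XPath
        # expression, and write them to a CSV. URL and XPATH are placeholders.
        import csv
        import requests
        from lxml import html

        URL = "http://example.com/somepage"
        XPATH = "//table//tr/td[1]/text()"   # whatever query importxml was using

        response = requests.get(URL, timeout=30)
        response.raise_for_status()

        tree = html.fromstring(response.content)
        values = tree.xpath(XPATH)

        with open("output.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for value in values:
                writer.writerow([value.strip()])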

    Read the article

  • Which is better for running Ubuntu and other Linux OSes, Chromebook or Windows, why? [on hold]

    - by Serge
    I'm learning programming and I would like to switch to a Linux OS, perhaps Ubuntu, to continue this. The current machine is getting pretty old and slow, Windows is my least favorite option for production work, and I can manage to get something new right around the price range of the nicest Chromebook on the market right now. However, I have compared the specs of the HP Chromebook 14 with those of regular PC laptops that cost roughly the same, and the latter consistently have approximately the same, and sometimes higher, specs (processor speed, for example). Yet usage of Chromebooks for this purpose is pretty widespread nowadays. Is this because they were initially built for a Linux OS - and is it really THAT crucial - or are there other major factors at play here?

    Read the article

  • SQL Performance Problem IA64

    - by Vendoran
    We've got a performance problem in production. QA and DEV environments are 2 instances on the same physical server:

        Windows 2003 Enterprise SP2, 32 GB RAM, 1 quad-core 3.5 GHz Intel Xeon X5270 (4 cores, x64), SQL 2005 SP3 (9.0.4262), SAN drives

    Prod:

        Windows 2003 Datacenter SP2, 64 GB RAM, 4 dual-core 1.6 GHz Intel Family 80000002, Model 6 Itanium (8 cores, IA64), SQL 2005 SP3 (9.0.4262), SAN drives, Veritas Cluster

    I am seeing excessive signal wait percentages (250%), and Page Reads/s (50) and Page Writes/s (25) are both occasionally high. I did test this query on both QA and PROD, and it has the same execution plan and even the same stats:

        SELECT top 40000000 * INTO dbo.tmp_tbl FROM dbo.tbl
        GO

        Scan count 1, logical reads 429564, physical reads 0, read-ahead reads 0,
        lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    As you can see it's just logical reads, however:

        QA:   0:48
        Prod: 2:18

    So it seems like a processor-related issue, but I'm not sure where to go next. Any ideas? Thanks, Aaron

    Read the article

  • Debian Lenny - network interfaces(eth) are in DOWN state

    - by pachanga
    Folks, I'm facing a very weird problem with one of my production servers (it's Debian Lenny): after a reboot the network interfaces (eth0, eth1) are in the DOWN state. It looks like an Intel-based network adapter is installed in the server; lspci lists it as follows:

        Ethernet controller: Intel Corporation Device 10c9 (rev 01)

    The kernel driver responsible for this adapter is "igb". I tried "modprobe -r igb && modprobe igb"; the network interfaces first disappear and then reappear, but they are in the DOWN state again. What could be wrong? It used to work just fine. How can this be fixed?

    Read the article

  • How to set up RAID 1 on Dell PERC S300 With Existing OS Install

    - by Daniel Dugger
    We have a server that is being used in production, but it was not originally meant to be. The main thing I want to add to it is a Dell PERC S300 RAID card, so that the main hard drive (Windows Server 2008 R2) is mirrored onto another hard drive. I cannot initialize the disk, wipe the OS to create the array, and then re-install. Is there a way to create the array with the current hard drive, without affecting it, and just mirror the drive? If that card is not an option, is there a card that would allow this? The server is a Dell PowerEdge T110 II.

    Read the article

  • Exchange Disconnecting on EHLO with remote telnet

    - by Timothy Baldridge
    When I go to the local terminal on my Exchange box (SBS 2008) I can do this:

        telnet 127.0.0.1 25
        220 Exchange banner here
        EHLO example.com
        250 Server name

    However, when I go from another box, or connect to the actual IP of the server, I get this:

        telnet 192.168.21.20 25
        220 Exchange banner here
        EHLO example.com
        421 4.4.1 Connection timed out
        Connection to host lost.

    The odd thing is, this server is currently in production and working fine (receiving mail for our entire domain), but my C# programs can't send mail to it (they get this same error). Any ideas?
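    As a point of comparison, here is a small Python sketch (the host and port are the ones from the telnet test above) that performs the same EHLO exchange programmatically, which can help show whether the failure happens on connect or only after EHLO:

        # ehlo_test.py - reproduce the telnet EHLO test programmatically.
        import smtplib

        HOST = "192.168.21.20"
        PORT = 25

        server = smtplib.SMTP(timeout=30)
        server.set_debuglevel(1)                 # print the full SMTP conversation
        code, banner = server.connect(HOST, PORT)
        print("connect:", code, banner)

        try:
            code, reply = server.ehlo("example.com")
            print("ehlo:", code, reply)
            server.quit()
        except smtplib.SMTPServerDisconnected as exc:
            print("server dropped the connection after EHLO:", exc)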

    Read the article

  • Unknown MySQL server host - connection problem

    - by Zukas
    I am new to databases and I have been asked to look at a few tables and see how many records they have, plus some other information. I cannot access phpMyAdmin through cPanel, which is how I've always done it on my own server, so I decided to download MySQL Workbench. I enter all the information it asks for:

        Hostname: mysite.startlogicmysql.com
        Port: 3306
        Username: user

    I press connect and get this:

        Unknown MySQL server host 'mysite.startlogicmysql.com' (11004)

    Am I using the wrong hostname? I've seen a server name and a hostname in the server variables list, which is something like custsql.eigbox.net, and the server itself is custsql.eigbox.net. In both cases the custsql part is a little different from what I posted, and I am not sure which one to use. If there is anything else anyone needs to know, I can tell you. Thanks.
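    Error 11004 generally indicates a name-resolution problem on Windows, so one quick check is whether the candidate hostnames resolve at all from the local machine. A small sketch (the hostnames are the ones mentioned above):

        # dns_check.py - check whether the candidate hostnames resolve from this machine.
        import socket

        for host in ("mysite.startlogicmysql.com", "custsql.eigbox.net"):
            try:
                addr = socket.gethostbyname(host)
                print("%s resolves to %s" % (host, addr))
            except socket.gaierror as exc:
                print("%s does not resolve: %s" % (host, exc))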

    Read the article

  • Update mysql database with arpwatch textfile database

    - by bVector
    I'm looking to keep arpwatch entries in a MySQL database to cross-reference with other information I'm storing based on MAC addresses. I've manually imported the arpwatch database into my MySQL database, but being a novice with databases I'm not sure of the best way to continually update the table with new entries without creating duplicates. None of the fields can be unique on its own, as even the time is duplicated frequently. I'm not interested in the actual arpwatch events like "flip flop" or "new station", just the mac/ip/time pairings. Would a simple bash (or SQL) shell script do the trick? Would it be possible to make the MAC address plus the time a composite key of some sort? The database is called utility, the table is arpwatch, and the columns are mac, ip, time. A separate table named 'hosts', with columns mac, ip, type, hostname, location, notes, has mac as the primary key; this table will correlate the different IP addresses that a MAC had over time, using the arpwatch data. The initial import was done with MySQL Workbench using INSERT INTO commands, with creative search and replace on the text file.
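    As an illustration of the composite-key idea, here is a sketch in Python. The database, table, and column names are the ones from the question, but the arp.dat path, its field layout (mac, ip, unix timestamp, optional hostname), and the credentials are assumptions:

        # arpwatch_import.py - sketch: import arpwatch's arp.dat into utility.arpwatch
        # without creating duplicates. Path, field layout, and credentials are assumptions.
        import pymysql

        ARP_DAT = "/var/lib/arpwatch/arp.dat"

        conn = pymysql.connect(host="localhost", user="arpwatch_user",
                               password="secret", database="utility")
        cur = conn.cursor()

        # Composite unique key over (mac, ip, time): the exact same pairing is rejected,
        # while repeats of mac or time alone are still allowed. Run once only; it will
        # fail if the key already exists or if the table already contains duplicates.
        cur.execute("ALTER TABLE arpwatch ADD UNIQUE KEY uniq_pairing (mac, ip, time)")

        rows = []
        with open(ARP_DAT) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 3:
                    continue
                mac, ip, ts = fields[0], fields[1], fields[2]
                rows.append((mac, ip, ts))

        # INSERT IGNORE silently skips rows that would violate the unique key,
        # so re-running the import does not create duplicates.
        cur.executemany("INSERT IGNORE INTO arpwatch (mac, ip, time) VALUES (%s, %s, %s)", rows)
        conn.commit()
        conn.close()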

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >