Search Results

Search found 59041 results on 2362 pages for 'data replication'.


  • pnp4nagios does not generate perfdata

    - by gonvaled
    I am running nagios2, pnp4nagios-0.6.16 and php 5.2.4-2ubuntu5.19. In my setup, pnp4nagios is correctly generating perfdata, which can be seen via the web interface in graphical form for lots of services. The perfdata directory contains entries of the kind:

        /usr/local/pnp4nagios/var/perfdata/zeus/Disk_Space_Home.rrd
        /usr/local/pnp4nagios/var/perfdata/zeus/Disk_Space_Home.xml

    I have activated performance data for a new nagios service:

        define serviceextinfo {
            host_name            zeus
            service_description  450average
            action_url           /pnp4nagios/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
        }

    This service is generating monitoring data in the format status_info|perf_data, as required for performance gathering. But somehow the performance data related to this service is not being collected by pnp4nagios (there are no related entries in /usr/local/pnp4nagios/var/perfdata). Are there any pnp4nagios scripts or settings I could use to debug this?
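
    (A debugging starting point -- the paths below are guesses for a Debian nagios2 install, so adjust to taste. Nagios only hands perfdata to pnp4nagios when performance-data processing is enabled both globally and on the service itself; a serviceextinfo block alone only adds the action_url link.)

        # must be 1 globally:
        grep process_performance_data /etc/nagios2/nagios.cfg
        # bulk-mode export settings pnp4nagios relies on:
        grep -A2 service_perfdata_file /etc/nagios2/nagios.cfg
        # and the service definition (not the serviceextinfo) needs:
        #     process_perf_data  1
        # pnp4nagios also ships a config checker in its sources
        # (script name and options from memory -- check the scripts/ dir):
        php verify_pnp_config.php --mode bulk \
            --config=/etc/nagios2/nagios.cfg --pnpcfg=/usr/local/pnp4nagios/etc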

    Read the article

  • How to consolidate servers with the not-very-strong infrastructure

    - by Sim
    All,

    Situation:

    We are in the retail industry, with about 10 distributors, and use Solomon as the standard ERP for all our systems. Each distributor has 1 HQ and 5-10 branches, and each branch has its own server (Windows 2000/XP/2003 + Solomon + another built-in POS system). Every day, the branches have to extract data and send it (via email/Skype) to HQ for data consolidation. When we first deployed our ERP, the infrastructure (e.g. Internet connection) wasn't reliable enough. That's why we went with the decentralized model (each branch got its own server). Now the infrastructure has matured, and we need to consolidate data more quickly (not branches -- HQ -- our company, but just HQ -- our company).

    Goal:

    We keep Solomon servers only in the distributor HQs. All transactions in the branches (retrieved from the POS) are synchronized with the HQ server directly. There is a backup plan in case the Internet goes down, or the HQ server goes down.

    Question:

    Given the above, could you suggest a model for us? Should we use Terminal Services, or some other solution? Any watch-outs or suggestions? Any good articles to read about this? Thanks a lot.

    Read the article

  • SQL Server 2008 R2 100% availability

    - by Mark Henderson
    Is there any way to provide 100% uptime on SQL Server 2008 R2? From my experience, the downtimes for the different replication methods are:

        Log shipping:     lots (for DR only)
        Mirroring w/ NLB: ~45 seconds
        Clustering:       ~5-15 seconds

    And all of these solutions involve all of the connections being dropped from the source, so if the downtime is too long, or the app's gateway doesn't support reconnection in the middle of a task, you're out of luck. The only way I can think of to get around this is to abstract the clustering up a level, by virtualising and then enabling VMware FT. Yuck. Good luck getting that to work on a quad-socket, 32-core system anyway. Is there any other way of providing 100% uptime of SQL Server?
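
    (One mitigation sketch for the reconnection half of the problem, assuming the app layer can be wrapped: retry for longer than the worst expected failover window instead of failing the task. The server name and timings below are placeholders.)

        #!/usr/bin/env bash
        # ride out a 5-45s mirroring/cluster failover by retrying for ~60s
        for attempt in $(seq 1 30); do
            sqlcmd -S sqlcluster -Q "SELECT 1" -l 5 -b && exit 0   # -l 5: login timeout
            echo "attempt $attempt failed, retrying..." >&2
            sleep 2
        done
        exit 1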

    Read the article

  • migrate SharePoint to SBS Server

    - by Eric Lorson
    We have a SharePoint 2003 server and we need to migrate that data to SharePoint on an SBS 2011 server (which ships SharePoint Foundation 2010). We cannot use the migration tool because one of the servers is SBS and the other is not. We exported the SharePoint data from the old system, but the import into the SBS SharePoint is failing, with very little information on why. I suspect a schema conflict, but I am not that familiar with SBS and I am not finding the error in the Windows logs. Has anyone had to migrate data from a non-SBS system to an SBS system? Or can anyone help me figure out where to look for more information on what is going on?

    Read the article

  • Issue with Exchange 2010 and Removing a Mailbox Database

    - by ThaKidd
    I did a 2003 to 2010 transition and everything is working well. During the 2010 install, a mailbox database was created with a random number at the end of its name. I found it and moved the three system mailboxes out of it into the database that holds all of the client accounts. I used the EMS to move those mailboxes to the other store, then used the EMC to remove the mailbox database. The problem is, I am now getting an error in the event viewer every few hours complaining about this database:

        MSExchangeRepl - 4098
        The Microsoft Exchange Replication service couldn't find a valid configuration for
        database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'SERVER'.
        Error: (nothing reported after this)

    Does anyone know how to fix this issue? In advance, I appreciate your help and thank you for your valuable input!

    Read the article

  • Can I mark a folder as mountpoint-only?

    - by Collin
    I have a folder ~/nas on which I usually mount a network share with sshfs. Today, I didn't realize the share hadn't been mounted yet, and copied some data into it. It took me a bit to realize that I'd just copied data onto my own local drive rather than the network share. Is there some way to tell the system that this folder is supposed to be a mount point, and to not let anyone copy data into it? I tried the permissions solution here: How to only allow a program to write to a directory if it is mounted? -- but if I don't have write access, I also can't mount anything onto it.
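
    (One approach worth testing, assuming ~/nas lives on an ext2/3/4 filesystem: make the empty directory immutable. mount(8) doesn't modify the directory it mounts over, so mounting keeps working, but file creation -- even by root -- is refused until the flag is removed with chattr -i. The share path below is a placeholder.)

        sudo umount ~/nas 2>/dev/null        # flag must go on the bare mountpoint
        sudo chattr +i ~/nas
        cp somefile ~/nas/                   # now fails: "Operation not permitted"
        sshfs user@nas:/share ~/nas          # still mounts fine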

    Read the article

  • Best solution for High Availability and SSRS on SQL Server 2008 R2?

    - by Chandra
    I have 2 physical servers with SQL Server 2008 R2: SQL Server 1 (active) and SQL Server 2 (passive). The web application is developed on the .NET 4.0 Framework. I want to know the best solution to have high availability and also have SSRS for reporting.

    Planned solution: mirroring for failover, and transactional replication for SSRS, since the mirrored database can only be used in failover scenarios. SSRS will be on the passive server, to reduce the load on the active server.

    Let me know if this solution is correct, and please suggest alternate approaches.

    Read the article

  • How to execute programs on mounted partition

    - by DevNoob
    This is the application I want to run:

        -rwxr-xr-x 1 manuel manuel 582841 Nov 22 09:51 PromServerMain

    This is the fstab entry:

        /dev/sda8  /media/data0  ext4  defaults,user  0  2

    This is the mount point:

        lrwxrwxrwx 1 manuel manuel    5 Nov 16 14:23 data -> data0
        drwxrwxr-x 9 manuel manuel 4096 Nov 22 09:26 data0

    This is what I get:

        manuel@P5KC /media/data/Projekte/PromServer/src $ ./PromServerMain
        bash: ./PromServerMain: Keine Berechtigung   (Permission denied)
        manuel@P5KC /media/data/Projekte/PromServer/src $ sudo ./PromServerMain
        sudo: unable to execute ./PromServerMain: Permission denied

    It fails even as root, and I have no clue what's wrong. Any suggestions? The system is Debian Wheezy with Xfce.
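
    (A likely culprit worth checking first: the user option in fstab implies noexec, as well as nosuid and nodev. Options are applied left to right, so adding an explicit exec after user re-enables execution.)

        # hypothetical corrected /etc/fstab line:
        /dev/sda8  /media/data0  ext4  defaults,user,exec  0  2

        sudo mount -o remount /media/data0
        mount | grep data0        # 'noexec' should no longer be listed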

    Read the article

  • Immediate Propagation in Active Directory

    - by squillman
    It's been a while since I've done any large-scale AD administration, so I'm reaching back a bit here. I remember that there are certain security-related attributes on a user account object that, due to their nature, are flagged for immediate propagation to other sites. I have a case where password resets are not being propagated until scheduled replication happens, and I had thought that was a case of immediate propagation. Am I just remembering incorrectly? The domain functional level is 2003.

    Read the article

  • SQL Server 2008 licensing question relating to web servers

    - by Matty Brown
    We purchased SQL Server 2008 Standard licences last year under the server + device CAL licensing model. Since our server has 2 physical CPUs and only 46 clients, this option was by far the cheapest. Now we'd like to be able to query a small number of stored procedures from our Windows Server 2003 Web Edition server, which is in a separate zone on our firewall. I think SQL Server 2008 Web Edition could be an option for us, but is it possible to replicate/mirror stored procedures and tables to such a server, and would we be breaking any rules by doing so? Is this a form of multiplexing? Also, would replication/mirroring work both ways, if we wanted to write data back from the web server?

    Read the article

  • Remote Desktop Services In A Virtual VMWare Environment

    - by Christopher W. Szabo
    I have a quick question regarding Microsoft Remote Desktop Services in a virtualized environment using VMware. This environment will be hosted in a large data center, within a cloud that the data center offers. This particular data center can establish high-speed point-to-point connections via metro Ethernet with customers who are hosted in the cloud. The result is that customers can host their corporate domain in the data center's cloud. Put the merits of such a configuration aside for the time being; believe me when I say that the cloud is stable and has enough hardware behind it to rival a dedicated cabinet. My question has to do with RDS in a virtual environment, which would amount to virtual desktops hosted on a virtual server. I've read that this works without issue using Hyper-V and VMware, but before I take the plunge I wanted to get some feedback from the community.

    Read the article

  • Is there a network "tee"-alike with one leg returning to /dev/null ?

    - by Steff Davies
    I've just built a new PostgreSQL server for my employers, which is happily replicating using WALs. I'm now left with the problem of verifying its performance. One nice idea that came up in conversation is to break replication with the slave caught up, and then direct all production traffic to both servers, discarding the responses from the new server and returning those from the current one to the clients. Once we're sure performance is OK, we re-sync the slave and can fail over with confidence. Bliss. This would require a TCP proxy capable of opening two outgoing connections for each incoming one and discarding the data returned from one of them -- which is a tricky thing to google for, it seems. Do the assembled brains know of such a thing, before I dive into libevent and write one?
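
    (Not a full answer, but a packet-level approximation to weigh before writing a proxy: the iptables TEE target clones matching packets to a second host. It is not a TCP proxy -- the mirror must be configured to accept traffic addressed to the production IP, and its replies must be suppressed -- but it can be enough for replaying load. Addresses and port below are placeholders.)

        # on a router in the path: clone PostgreSQL traffic for the live
        # server (10.0.0.10) to the candidate box (10.0.0.11)
        iptables -t mangle -A PREROUTING -d 10.0.0.10 -p tcp --dport 5432 \
                 -j TEE --gateway 10.0.0.11
        # on the mirror: make sure its replies never reach real clients
        iptables -A OUTPUT -p tcp --sport 5432 -j DROP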

    Read the article

  • Can I use the voice & SMS features of my GSM SIM through my laptop?

    - by i..
    My laptop (a Lenovo T410s) has an internal GSM modem (Device Manager calls it a Qualcomm Gobi 2000 HS-USB Modem 9205) in which I'm currently using a regular (voice, data, text, etc.) 3G SIM. The data functionality works great through the Lenovo software and Windows 7, but I was wondering if I can use the other features (specifically voice and SMS) through Windows. Is it possible to use the non-data features of my 3G SIM through my Qualcomm GSM modem? If so, what software is available to this end? If not, where is the restriction (e.g. hardware, OS, driver, software)? Thanks!
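
    (On the SMS half, the usual route -- assuming the Gobi exposes a plain modem/AT port alongside the data interface -- is the standardized GSM AT command set, driven from any terminal program. A sketch of the standard commands, shown as raw writes to a generic Linux serial device for brevity; on Windows you would open the corresponding COM port instead. Voice is harder: it needs an audio path that data-oriented modules often don't wire up at all.)

        printf 'AT+CMGF=1\r' > /dev/ttyUSB1                 # SMS text mode
        printf 'AT+CMGS="+15551234567"\r' > /dev/ttyUSB1    # recipient (placeholder)
        printf 'Hello from the laptop\x1a' > /dev/ttyUSB1   # body, Ctrl-Z sends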

    Read the article

  • MySQL on a laptop for remote workers - MyISAM keeps corrupting

    - by Jonathon
    We have an application that is used by remote, mobile workers. It installs WAMP (Server2Go) on a laptop and uses MySQL to store data locally. All tables are MyISAM. Once a day, the workers sync the database to our central server via HTTP scripts that query the data and post it to our site. The problem is that many of these laptop database tables keep getting corrupted. It appears that MySQL acts as if it saves the information (I don't get any query errors), but the table ends up corrupt. I have to repair the tables constantly (which removes several rows of data in the process). Does anyone have any ideas about how to work around this problem? Would it be wise to switch to InnoDB on the laptops? How about a different database system altogether? I have looked at MySQL Embedded, but it appears to be the same engine as regular MySQL.
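
    (Background that may help frame answers: MyISAM has no crash recovery, so a laptop that sleeps, crashes, or loses power mid-write will eventually corrupt tables; InnoDB journals its writes and repairs itself on startup, which is why switching engines is the usual fix. Database and table names below are placeholders.)

        # repair the damaged MyISAM table first, then convert it:
        mysql -u root -p -e "REPAIR TABLE fielddb.orders;"
        mysql -u root -p -e "ALTER TABLE fielddb.orders ENGINE=InnoDB;"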

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html -- it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and slapcat -b cn=config dumps a load of config information. But when I try to query it from the command line using the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
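
    (Possibly the missing piece, if these boxes use the stock Debian/Ubuntu packaging: cn=config has its own access rules and is normally readable only by the local root user over the ldapi socket via SASL EXTERNAL, not by the directory's admin DN -- which would produce exactly the result: 32 above.)

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config
        # and to see who may bind to cn=config directly, if anyone:
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config olcRootDN olcAccess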

    Read the article

  • File creation time on Windows vs Linux

    - by Sergei
    We have the following setup:

        mountserver  - Debian Linux
        fileserver1  - Windows 2008 R2 storage server
        fileserver2  - Celerra NS20 exporting a CIFS share
        workstation  - Windows 7 with a mapped drive to the share on fileserver2

    What we are doing:

        1. Mounted the share from fileserver1 on mountserver, e.g. /shared/fileserver1
        2. Mounted the share from fileserver2 on mountserver, e.g. /shared/fileserver2
        3. Ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime as the criterion to sync only data not older than X
        4. After a while, tried to delete data older than Y on /shared/fileserver2

    From what I see, the Linux stat command on mountserver returns one set of timestamps for a file on /shared/fileserver2 (screenshot omitted). At the same time, when I open the Properties of the same file via the mapped drive connected to fileserver2, I see a Created date of 12 August (screenshot omitted) -- a date that is nowhere to be seen in the stat output. Am I missing something here?
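
    (The likely root of the mismatch: POSIX stat has no creation-time field at all. Its three timestamps are access, modify, and change -- and "change" (ctime) is inode-change time, not Windows' Created date -- so any age test done through the Linux mount compares a different clock than Explorer shows. The path below is a placeholder.)

        stat -c 'atime=%x  mtime=%y  ctime=%z' /shared/fileserver2/somefile
        # note: reading files (e.g. during rsync) can itself update atime on
        # the source unless the mount uses noatime/relatime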

    Read the article

  • Is it safe to delete "Account Unknown" entries from Windows ACLs in a domain environment?

    - by Graeme Donaldson
    It's not uncommon to see entries in Windows ACLs (NTFS files/folders, registry, AD objects, etc.) with the name "Account Unknown (SID)". Obviously these are because of old AD users or groups which at some point had permissions manually configured on the relevant object and have since been deleted. Does anyone know if it is safe to remove these "Account Unknown" ACEs? My gut feeling is that it should be just fine, but I'm wondering if anyone has any past experiences where doing this has caused trouble? Normally I just ignore these, but the company I'm working at now seems to have an abnormal number of these, most likely due to past admins' inexperience with AD/Windows and assigning permissions to user accounts rather than groups in all sorts of weird places. FWIW, our environment is not complex, a single domain forest, 4 DCs in 3 sites, with all network connectivity and replication healthy, so I'm certain that these "Account Unknown" entries are really old accounts, and not just because of some failure to resolve the SID to a human-readable name.

    Read the article

  • In TCP/IP terms, how does a download speed limiter in an office work?

    - by TessellatingHeckler
    Assume an office of people. They want to limit HTTP downloads to a max of 40% of their internet connection's bandwidth so that downloads don't block other traffic. We say "it's not supported in your firewall", and they say the inevitable line, "we used to be able to do it with our Netgear/DLink/DrayTek". Thinking about it, a download works like this:

        1. HTTP GET request
        2. Server sends file data as TCP packets
        3. Client acknowledges receipt of TCP packets
        4. Repeat until download finished

    The speed is determined by how fast the server sends data to you, and how fast you acknowledge it. So, to limit download speed, you have two choices:

        1. Instruct the server to send data to you more slowly -- and I don't think there's any protocol feature to request that in TCP or HTTP.
        2. Acknowledge packets more slowly by limiting your upload speed -- and also ruin your upload speed.

    How do devices do this limiting? Is there a standard way?
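
    (For what it's worth, the boxes that "used to do it" mostly implement option 2 in disguise: they police the flow by dropping or delaying packets above the configured rate, and the sender's TCP congestion control backs off; some middleboxes also rewrite the advertised TCP receive window. A rough Linux equivalent -- interface and rate are placeholders:)

        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip u32 \
           match ip src 0.0.0.0/0 \
           police rate 4mbit burst 40k drop flowid :1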

    Read the article

  • ganglia graphs like munin for cpu, etc?

    - by CarpeNoctem
    I'm coming from munin, where a CPU graph contains data for system, user, nice, etc. ALL on one graph. I just installed ganglia and set up the basic monitoring, and it appears that each type of CPU data gets its own separate graph! WTF is this, and can I change the defaults to combine these into a single graph per host? That is my question: how do I combine CPU data into a single graph? Also, can I change the layout to something closer to munin's day/week side-by-side layout? I'm trying to be impartial and give ganglia a chance. ;)

    Read the article

  • Port 53 UDP Outgoing flood

    - by DanSpd
    Hello. I am experiencing a huge problem. I have 4 computers in the network, and from each of them a lot of data is being sent to the ISP's name servers. Sometimes a little data is sent from each computer in the network; sometimes it is a lot of data from just one computer. I have antivirus (Avast) and a malware scanner (SpyBot). I know port 53 UDP is DNS, which resolves domain names to IPs, so it's needed. I have also read that the ISP's name server might have been infected. What is the best thing to do in this situation? Also, the internet sometimes starts to lag badly because of this port 53 traffic.
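
    (Whatever the cause, a sensible first step is to capture a sample of the queries and map the traffic back to a process, rather than guessing. The interface name below is a placeholder; on the Windows machines themselves, netstat -b run as administrator shows which executable owns each socket.)

        # on the gateway/router, or any box that can see the traffic:
        tcpdump -ni eth0 -c 200 udp dst port 53
        # on a suspect Linux machine, tie sockets to processes:
        ss -uanp | grep ':53'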

    Read the article

  • how to setup sonicwall tz210 to port forward packets received from external ip to another external ip

    - by lplp
    I have a SonicWall TZ210 on a fixed IP, say ip1. Then I have, let's say, a legacy server with external IP ip2, which sends data to ip1 (and I have another server behind the SonicWall on ip1 which receives and processes that data). I would like to set up a new server on a different external IP, ip3, that will also receive and process data from the legacy server. How can I set up the SonicWall so that the packets received from the legacy server (from an external IP) are port forwarded to the external IP address ip3?

    Read the article

  • How is it possible that Winrar can repair any volume with one .rev file?

    - by Coldblackice
    I just learned about .rev files in WinRAR -- if you have a 10-part RAR volume set, for example, plus one .rev (recovery) volume, that .rev volume will be able to "fix" any one corrupted volume. How is it able to do this? Obviously there's something I'm not understanding, as I don't see how one volume could hold the data to fix any/all of the individual broken volumes. I'd guess it's because the volumes aren't broken up the way I tend to imagine -- each volume holding individual files of the whole -- but rather the set is viewed as one continuous "file" of data, so to speak, with some kind of CRC-ish repair work on top. But I just don't understand how you could have 9 working volumes, 1 damaged, and a recovery volume able to repair *any* one of those volumes. How can it hold "all" of the data in just one recovery file?
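
    (The core idea: a .rev file is not a copy of anything; it is parity computed across the aligned bytes of all the volumes. WinRAR's actual scheme is a proper error-correcting code, but with a single .rev it behaves conceptually like RAID-5-style XOR parity: since parity = vol1 XOR vol2 XOR ..., any ONE missing volume equals the parity XORed with all the survivors. A toy demo of that identity -- needs gawk and xxd, slow, illustration only:)

        xorfiles() {  # byte-wise XOR of two equal-size files
            paste <(xxd -p -c1 "$1") <(xxd -p -c1 "$2") |
                gawk '{ printf "%02x", xor(strtonum("0x" $1), strtonum("0x" $2)) }' |
                xxd -r -p
        }
        head -c 1024 /dev/urandom > vol1.bin
        head -c 1024 /dev/urandom > vol2.bin
        xorfiles vol1.bin vol2.bin > parity.rev        # the "recovery volume"
        xorfiles parity.rev vol2.bin > vol1.rebuilt    # pretend vol1 was lost
        cmp vol1.bin vol1.rebuilt && echo "vol1 recovered byte-for-byte"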

    Read the article

  • Weather Logging Software on Windows Home Server

    - by Cruiser
    I'm looking for some weather logging software that I can run as a Windows Home Server add-in, or as a service on my Home Server, so I don't need to stay logged into the server to record weather data. I have an Oregon Scientific WMR918 weather station and an HP MediaSmart EX485 Windows Home Server. The two are currently connected through a serial Bluetooth adapter, but that shouldn't matter, as the computer sees it as essentially a serial device. I'm currently using Cumulus to log data and upload to Weather Underground, but it is a regular Windows application, so I need to remain logged into my Home Server via RDP for it to run (I disconnect, but don't log off, so the session remains open). Ideally I would like something that runs as a service or WHS add-in, so that it runs all the time without anyone logged in, can log data from my WMR918, and can upload to Weather Underground. Thanks!

    Read the article

  • URI Scheme, launch program in its directory

    - by ZaKlaus
    I have registered a URI scheme for my app, but when I open the URI with "Run..." or from a browser, the program runs with the caller's working directory. For example, if I open the URL from a web page, the program's working directory is the browser's. What do I want? I want the program test.exe, located at C:\data\test.exe, to run with C:\data as its working directory, so it can reach its other data by relative path -- i.e. test.exe should be able to access the file .\file.txt without using an absolute path. I hope that is understandable; sorry for my bad English.

    Read the article

  • Centralize proxy settings for all the application on my workstation

    - by Leonardo
    As a consultant, I work at different clients' premises. Most of them have specific proxy settings, but not all the applications installed on my laptop take their settings from the system preferences, which would let me change settings in one place. Some of them bypass system preferences completely, presenting their own dialog for entering data such as username, host and password. I am looking for a convenient, unintrusive way to have a single place where I can enter this data, and perhaps persist it. Automatic switching would be ideal, for example based on some network identification, but entering the data manually is no problem for me. I am not an IT expert but, to explain myself clearly, I am looking for something that does for every application what a .pac file does for browsers. The relevant OSes I am using are Mac OS X and Linux (Ubuntu).
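
    (One low-tech option for the command-line side, since many Unix tools honour the http_proxy/https_proxy environment variables: switch those by detected network. A hypothetical switcher -- SSIDs and proxy hosts are made up -- meant to be sourced from a shell profile on the Linux laptop; on Mac OS X the SSID lookup would be networksetup -getairportnetwork en0 instead.)

        # proxy-switch.sh -- source from ~/.bashrc or similar
        ssid=$(iwgetid -r 2>/dev/null)
        case "$ssid" in
            ClientA-Corp) export http_proxy="http://proxy.clienta.example:8080" ;;
            ClientB-*)    export http_proxy="http://10.1.2.3:3128" ;;
            *)            unset http_proxy ;;
        esac
        if [ -n "${http_proxy:-}" ]; then export https_proxy="$http_proxy"; else unset https_proxy; fi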

    Read the article
