Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Cacti is not monitoring the correct host... How do I change SNMP target?

    - by wil
    I have been handed a Cacti server that monitors a few hosts. I noticed that three of the targets - the Cacti machine itself, machine A and machine B - were displaying exactly the same data. After a bit of digging, I noticed that machine A and machine B had "Local Linux Machine" set under "Host Template". I have since changed the host template to "Generic SNMP-enabled Host"; however, all the graphs still only display data from the local Cacti machine (the graphs update every 5 minutes, and I made the change yesterday, over 12 hours ago). I can't think what else is wrong and was wondering if anyone knows/can recommend anything?
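    A first sanity check is to confirm that the Cacti box can actually reach the other hosts over SNMP at all (the community string below is just the common default and may differ in your setup):

        snmpwalk -v 2c -c public machine-a sysDescr.0
        snmpwalk -v 2c -c public machine-b sysDescr.0

    If those return data, the remaining suspects are the existing data sources: graphs and data sources created while the hosts used the "Local Linux Machine" template keep their original data-input method, so they may need to be re-created after the host template change.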


  • How to get partition information from non-booting server?

    - by gravyface
    I need to manually rebuild a mirrored array on a server and am in the process of reinstalling SBS 2003 on it. However, it's a Dell server, and I know that there's the Dell FAT32 diagnostics partition, a system partition, and a data partition, but I do not know the size of each. I'm planning on reinstalling SBS 2003 and all applications on the server, and then doing a System State restore, but I figured that not having the correct partitions will cause some grief: am I right? I'm almost thinking that the size of the partitions shouldn't matter, but I'm not positive. Question: should I care about the size of the partitions? If so, how can I get this partition information from a non-booting drive? We have an Acronis image of the one working disk and the partitions are mounted/viewable in Explorer on a workstation, but I'm not sure where the Logical Disk Manager/Disk Management data is stored and/or if there's a way to retrieve it without having a working Windows installation.
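    One low-tech way to read the layout, assuming the surviving mirror member is still readable: boot the server (or any machine the disk is attached to) from a Linux live CD and print the partition table. The device name below is a placeholder:

        fdisk -l /dev/sda
        parted /dev/sda unit MB print

    Either command lists the start, end and size of each partition, which should be enough to recreate the Dell diagnostics, system and data partitions at their original sizes.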


  • Export and import a PostgreSQL database with a different name?

    - by J. Pablo Fernández
    Is there a way to export a PostgreSQL database and later import it with another name? I'm using PostgreSQL with Rails, and I often export the data from production, where the database is called blah_production, and import it on development or staging with the names blah_development and blah_staging. On MySQL this is trivial, as the export doesn't mention the database anywhere (except in a comment, maybe), but on PostgreSQL it seems to be impossible. Is it? I've seen some people out there using sed scripts to modify the dump. I'd like to avoid that solution, but if there is no alternative I'll take it. Has anybody written a script that alters the dump's database name while ensuring no data is ever altered?
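    For what it's worth, a plain pg_dump doesn't embed the database name either (unless you pass -C), so a sketch like the following should restore into any database name without touching the dump:

        pg_dump blah_production > blah.sql      # no CREATE DATABASE statement without -C
        createdb blah_development
        psql blah_development < blah.sql

        # or, using the custom format and pg_restore:
        pg_dump -Fc blah_production > blah.dump
        pg_restore -d blah_staging blah.dump

    The target database just has to exist (and be owned appropriately) before the restore; the dump itself never names it.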


  • How to fill in the network line in the ubuntu interfaces config file?

    - by matnagel
    I have to configure an Ubuntu Hardy server network interface. The service hoster told me that this is the network data for the machine: IP Range: 111.111.200.74 to 111.111.200.78, Netmask: 255.255.255.248, Broadcast: 111.111.200.79, Gateway: 111.111.200.73, Subnet: 111.111.200.72/29. I am only using the first IP address. I will update the /etc/hosts file with 111.111.200.74, but I am still unsure how the /etc/network/interfaces file should look. This is my plan:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 111.111.200.74
            netmask 255.255.255.248
            network 111.111.200.???
            broadcast 111.111.200.79
            gateway 111.111.200.73

    As you can see, I don't know how to build the network line. How would I calculate the data for the network line and what is the result? (I changed the first 2 octets of the subnet; they are not "111.111" in the real setup.)
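    The network address is the IP ANDed with the netmask: 111.111.200.74 & 255.255.255.248 = 111.111.200.72, which matches the subnet the hoster quoted (111.111.200.72/29). So the stanza would be roughly:

        auto eth0
        iface eth0 inet static
            address   111.111.200.74
            netmask   255.255.255.248
            network   111.111.200.72
            broadcast 111.111.200.79
            gateway   111.111.200.73

    A tool like ipcalc (`ipcalc 111.111.200.74/29`) will do the same arithmetic for you, and as far as I know ifupdown can derive the network value from address and netmask anyway, so the line is largely informational.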


  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine) rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily. Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system. It's not an rsync that'll be serviced by an rsync daemon on a remote system.

        rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file

        sending incremental file list
        >f..t...... VM-flat.vmdk    42949672960 100%   15.06MB/s    0:45:20 (xfer#1, to-check=5/6)
        >f..t...... VM.vmdk                 556 100%    4.24kB/s    0:00:00 (xfer#2, to-check=4/6)
        >f..t...... VM.vmx                 3327 100%   25.19kB/s    0:00:00 (xfer#3, to-check=3/6)
        >f..t...... VM_1-flat.vmdk  42949672960 100%   12.19MB/s    0:56:01 (xfer#4, to-check=2/6)
        >f..t...... VM_1.vmdk               558 100%    2.51kB/s    0:00:00 (xfer#5, to-check=1/6)
        >f..t...... STATUS.ok                30 100%    0.02kB/s    0:00:01 (xfer#6, to-check=0/6)

        Number of files: 6
        Number of files transferred: 6
        Total file size: 85899350391 bytes
        Total transferred file size: 85899350391 bytes
        Literal data: 2429682778 bytes
        Matched data: 83469667613 bytes
        File list size: 129
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 2432530094
        Total bytes received: 5243054

        sent 2432530094 bytes  received 5243054 bytes  295648.92 bytes/sec
        total size is 85899350391  speedup is 35.24

    Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved a true diff sync with ESXi?
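    Worth noting from the stats above: the delta algorithm did work at the data level (only ~2.4GB of literal data against ~83GB of matched data), so most of the time is spent checksumming both 40GB flat files end to end and then rebuilding the destination copy. One thing to try, as a sketch, is --inplace so rsync updates changed blocks in the existing destination file instead of writing a whole new one:

        rsync -avI --inplace --no-whole-file --existing --modify-window=2 --stats \
              VMBACKUP_2011-06-10_02-27-56/ VMBACKUP_2011-06-01_06-37-11/

    (-S/--sparse was dropped here because older rsync builds refuse to combine it with --inplace.) The full read of both files is unavoidable with rsync alone, so runtimes in the tens of minutes for 80GB on local storage may simply be the floor.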


  • SharePoint Web Analytics not tracking usage for main application

    - by Chris W
    My SP 2010 setup is two separate applications - one for the main portal and one for MySite. Whilst Web Analytics is tracking usage of MySite, it's not showing any stats for the main portal. The only thing it lists is the number of site collections, but no page views etc. The WA service is clearly running, since it picks up data for MySite. In "Configure web analytics and health data collection" everything is ticked. I can't find any obvious settings that differ between the two applications. Where should I look to get usage tracked correctly?


  • Any way to void document upload when user cancels?

    - by Michael Broschat
    We have developed a set of metadata fields for the user to complete during the file upload process (MOSS). What happens is that the user chooses Upload, then specifies the file on his system. Sometimes, when he sees what metadata is required, he clicks Cancel, knowing that he cannot supply the data at that time. The file is uploaded anyway, and sits in the library without any attached metadata. Our client finds this unacceptable, but I haven't found a way to cancel the actual upload when the user tells us he no longer wants to go through with it.


  • Get active network interface on Windows

    - by Kevin Walzer
    I'm developing an application that provides a UI to windump, the packet sniffer. Windump has a "-D" parameter that lists all network interfaces it can find, and then you can specify which interface to listen on. However, I'd like to avoid forcing the user to manually configure which interface to listen on. On Unix, I can obtain the right network interface (en0, en1, etc.) via a call to ifconfig and some parsing of the output, but I cannot locate any equivalent Windows API or command that can yield similar information--ipconfig doesn't seem to obtain this data. Can anyone suggest either a Windows command-line tool or an API that can be called via VBScript to obtain this data so that I don't have to present the user with a dialog in my GUI telling them to select the right interface?
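    One possibility, sketched below and not tied to any particular API: WMI exposes the adapters and their GUIDs, and windump's "-D" names are of the form \Device\NPF_{GUID}, so the GUID can be used to match the two lists. From a command prompt (or the equivalent Win32_NetworkAdapterConfiguration query in VBScript):

        wmic nicconfig where "IPEnabled=TRUE" get Description,SettingID

    SettingID is the adapter GUID; picking the IP-enabled adapter whose GUID appears in the "windump -D" output would let the UI preselect a sensible default interface.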


  • losetup does not decrypt device in Ubuntu 11.4 as before

    - by Kay
    I had an external volume mounted using losetup for about two years. It was created using Ubuntu 9.4, and I used the same Ubuntu installation throughout all dist-upgrades. Now that I have bought a new laptop, I set up a fresh Ubuntu 11.4 installation on it. Problem is: losetup -e twofish /dev/loop0 /dev/sdb2 no longer decrypts the volume. /dev/loop0 apparently contains random data. I am sure I entered the correct password. I modprobe'd cryptoloop and twofish. My question is: has Canonical made some obscure changes to losetup, like adding a salt? Does losetup depend on configuration files I did not know about? How can I decrypt the volume on my new laptop?
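    One avenue that is sometimes suggested for old cryptoloop volumes is to map them with cryptsetup in plain (non-LUKS) mode instead of losetup. The cipher mode, hash and key size below are guesses that would need to be varied until readable data appears; nothing here is known to match what the old losetup used:

        modprobe twofish
        cryptsetup -c twofish-cbc-plain -s 256 -h ripemd160 create oldvol /dev/sdb2
        # inspect read-only before trusting the mapping
        dd if=/dev/mapper/oldvol bs=512 count=8 | hexdump -C | head

    If a parameter combination is wrong the mapping simply shows garbage, and `cryptsetup remove oldvol` tears it down so another combination (e.g. -h plain or -h sha256, -s 128) can be tried without risk to the underlying data.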


  • VSFTPD uploaded file permissions

    - by Katafalkas
    Let me first say that there are loads of topics regarding this, and I am sure I have seen them all by now. Still, none of the solutions seem to help. I installed vsftpd and created a user ftp-data. Now I need files uploaded by the user ftp-data to end up with 755 permissions. Solving this should be as easy as adding:

        local_umask=002
        file_open_mode=0755

    but that did not help, and then I tried a number of variations of this, which still did not help. Then I added:

        chmod_enable=YES

    and it still did not help. At the moment I think that I am missing something very simple and obvious, I just can't find it. Maybe someone could help me find what I am missing. This is my config file:

        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=002
        anon_upload_enable=NO
        anon_mkdir_write_enable=NO
        dirmessage_enable=NO
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/xferlog
        listen=YES
        local_root=/var/www/ftp-gallery
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
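    If I read the vsftpd man page correctly, uploads are created with file_open_mode and then local_umask is applied on top of it, so for 755 the pair would be something like the following (note the config file above still has local_umask=002 and no file_open_mode line at all):

        file_open_mode=0777
        local_umask=022

    After editing, restart vsftpd (e.g. service vsftpd restart) and upload a fresh file to test; chmod_enable only controls whether clients may issue SITE CHMOD, so it has no effect on the initial permissions.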


  • How to repair unbootable Fedora install

    - by Cerin
    How do you repair/reinstall Fedora without deleting any existing partitions or data? I was attempting to upgrade some old Fedora 13 servers to 17, following the instructions in the wiki. After the 14-to-15 upgrade, rebooting resulted in the output:

        Dropping to debug shell.
        sh: can't access tty; job control turned off
        dracut:/#

    Running dmesg also shows:

        dracut Warning: No root device "block:/dev/mapper/VolGroup-lv_root" found

    Googling shows this error is typically related to some weird RAID issues, but my server is a virtual machine not using any RAID. Using a rescue CD, I can chroot /mnt/sysimage, and all packages and data still seem to be there. How do I make the system bootable again?
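    Since the packages and data are intact, the usual fix for a dracut "No root device" failure after an upgrade is to rebuild the initramfs from the rescue environment rather than reinstalling. Roughly (the kernel version is a placeholder for whatever is actually in /boot):

        # from the rescue shell, with the installed system mounted at /mnt/sysimage
        mount --bind /dev  /mnt/sysimage/dev
        mount --bind /proc /mnt/sysimage/proc
        mount --bind /sys  /mnt/sysimage/sys
        chroot /mnt/sysimage

        # regenerate the initramfs for the installed kernel
        ls /boot                    # note the exact kernel version
        dracut --force /boot/initramfs-3.3.4-5.fc17.x86_64.img 3.3.4-5.fc17.x86_64

    If the root LVM volume still isn't found after that, compare grub's root= argument and /etc/fstab against what lvscan reports inside the chroot.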


  • What parameters to mdadm, to re-create md device with payload starting at 0x22000 position on backing storage?

    - by Adam Ryczkowski
    I am trying to recover from an mdadm RAID disaster, which happened when moving from Ubuntu Server 10.04 to 12.04. I know the correct order of devices from the dmesg log, but given this information, I still cannot access the data. The superblocks look messy; the mdadm --examine output for each disk is on this question on askubuntu. By inspecting the raw contents of the backing storage, I found the beginning of my data (the LUKS container in my case) at position 0x22000 relative to the beginning of the first partition in the RAID. Question: what combination of options should be issued to "mdadm --create" to re-create the array with the payload starting at the given offset? Bitmap size? PS. The relevant information from syslog when the system was healthy is pasted here.
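    As a sketch only: recent mdadm versions accept an explicit --data-offset on --create, which is the usual way to pin where the payload starts when re-creating an array over existing data. 0x22000 bytes is 136 KiB; the level, chunk size, device count and device order below are placeholders that have to come from the --examine output, and --assume-clean is essential so no resync overwrites the disks:

        mdadm --create /dev/md0 --assume-clean --metadata=1.2 \
              --level=5 --raid-devices=3 --chunk=512 \
              --data-offset=136 \
              /dev/sda1 /dev/sdb1 /dev/sdc1

    (--data-offset takes its value in KiB by default, if I remember the man page right, and only exists in newer mdadm releases, so check `man mdadm` on the 12.04 box first; working on dd copies of the partitions instead of the real disks is also strongly advisable.)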


  • store image installation error in UEC

    - by selvakumar
    For my college final-year project we planned to set up a private cloud on two machines. I recently installed Ubuntu Enterprise Cloud (UEC) on two of my machines. I was trying to install the store image through the WebUI. I was able to download the Ubuntu 10.04 (i386) image, but while installing, it gives me the following error:

        Command 'euca-upload-bundle' returned status code 1:
        Checking bucket: image-store-1296600766
        Traceback (most recent call last):
          File "/usr/bin/euca-upload-bundle", line 231, in <module>
            main()
          File "/usr/bin/euca-upload-bundle", line 214, in main
            bucket_instance = ensure_bucket(conn, bucket, canned_acl)
          File "/usr/bin/euca-upload-bundle", line 87, in ensure_bucket
            bucket_instance = connection.get_bucket(bucket)
          File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 275, in get_bucket
            rs = bucket.get_all_keys(headers, maxkeys=0)
          File "/usr/lib/pymodules/python2.6/boto/s3/bucket.py", line 204, in get_all_keys
            headers=headers, query_args=s)
          File "/usr/lib/pymodules/python2.6/boto/s3/connection.py", line 342, in make_request
            data, host, auth_path, sender)
          File "/usr/lib/pymodules/python2.6/boto/connection.py", line 459, in make_request
            return self._mexe(method, path, data, headers, host, sender)
          File "/usr/lib/pymodules/python2.6/boto/connection.py", line 437, in _mexe
            raise e
        socket.error: [Errno 110] Connection timed out

    Could anyone please help me?
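    The traceback is boto timing out while talking to Walrus (the S3 endpoint), so the first thing to verify is that the S3_URL in the cloud credentials points at a reachable Walrus and isn't firewalled. A rough check, assuming the admin credentials have been unpacked into ~/.euca:

        . ~/.euca/eucarc
        echo "$S3_URL"
        curl -sv "$S3_URL" 2>&1 | head    # hanging here reproduces the Errno 110 timeout

    If curl hangs too, the problem is network or registration (Walrus not registered, wrong address, or port 8773 filtered) rather than anything in the image store itself.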


  • Tail the filename, not the file

    - by Craig Walker
    In UNIX (OS X BSD to be precise), I have a "tail -f" command on a log file. From time to time I want to delete this log file so I can more easily review it in my text editor. I delete the file, and then my program recreates it after new activity. However, my tail command (and anything else that was watching the old log file) doesn't update; it's still watching the old, deleted log file. I think I understand why this is (file names simply being pointers to blocks of file data). I'd like to know how I can work around this. Ideally, my tail command (and anything else I point to the file) would be able to read the data from the new file when the file name has been deleted and recreated. How would I do this?
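    On BSD/OS X, tail can be told to follow the name rather than the file descriptor, so it reopens the log when it is deleted and recreated; the path below is just an example:

        tail -F /var/log/myapp.log

    (-F implies -f but re-checks whether the name now points at a new file; GNU tail spells the same behaviour as `tail --follow=name --retry`.)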


  • Generating documents with templating from a form

    - by Anna
    Hello, I would like to create a document generator with templating. The workflow should be as follows:

    1. The user enters data into a static form (simple text inputs).
    2. The user chooses a graphically designed template.
    3. A document in the chosen template, containing the user's data, is generated.

    The initial template repository is prepared in advance, but it should be easy to add new templates to the process. I have the full MS Office suite, and the preferred output format is MS Word .doc. I can do a little VB scripting if needed, but I would prefer not to. Any advice would be greatly appreciated. Thank you, Anna


  • busybox does not display the throughput value at the end of a FTP session?

    - by rockyurock
    Hello, why does busybox not display a throughput value at the end of an FTP session? Or is it version-specific? I heard that some versions of busybox display a throughput value at the end of a data transfer, but I don't know which version that is. I typed the command below but did not get any throughput status:

        busybox ftpget -v -u user -p Password ip abc.txt abc.txt

    Could anybody please let me know how I can get the throughput value for upload/download data transfers? Also, how can we get the throughput value on the client side when doing a busybox ftpput operation? Regards, rocky
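    If the busybox build at hand simply doesn't print a rate, one workaround is to time the transfer yourself and divide the file size by the elapsed time; a rough sketch:

        time busybox ftpget -v -u user -p Password ip abc.txt abc.txt
        ls -l abc.txt        # bytes transferred / elapsed seconds gives an approximate rate

    The same wrapping works for busybox ftpput on the client side.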


  • Building new GIS Workstation - is it worth upgrading to a workstation GPU?

    - by bsigrist
    We are currently building a machine from scratch to act as a GIS workstation. The primary software used is ESRI's ArcGIS and we are mainly working with vector data using raster data only for contextual background imagery. In the past I have built a GIS machine and used a consumer grade gaming GPU (Nvidia 9800GT) and found it to perform fine. However, I have always wondered if I would have been better off equipping it with a workstation GPU such as a Quadro series. Would a workstation GPU make a noticeable difference doing 2D GIS operations or should I save money on the build and equip it with another 9800GT?


  • Un-table a cell range in Excel 2007

    - by Joe
    In Excel 2007, if you highlight a block of cells and then "Format as Table", it doesn't just apply colors and formatting, it somehow marks those cells as being a table. Now I want to get rid of the table, but keep all the cells (i.e. keep the data). So I tried clearing the table style and formatting, but Excel still recognizes those cells as being a table. I can tell because:

    - When I select a cell that was in the table, Excel still displays the "Table Tools / Design" tab.
    - I cannot merge cells that were in the table <- this is what's annoying me.

    So, how do I un-table those cells? I want to keep all the cell data and formatting, but have Excel not recognize them as a table.


  • Can not RDP to Win 2003 box or initiate remote restart

    - by Richard West
    I've got a Windows 2003 server at my remote data center. This morning I tried to connect to it via RDP, but the connection fails with the following error:

        This computer can't connect to the remote computer. Try connecting again. If the problem
        continues, contact the owner of the remote computer or your network administrator.

    I have also tried issuing a remote shutdown/restart command using "shutdown -i" from my local system. No error is reported, however the system does not reboot. This server runs SQL Server 2005 and it is still fully operational and responsive to queries. I can also remotely connect to the services control panel of the remote system. Is there anything I can try to regain control of the system, short of having an operator in the data center do a hard reboot of the server for me?
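    Since the services snap-in still connects, a couple of things worth trying from a command prompt (the host name is a placeholder): confirm Terminal Services is still running, and use the forced variant of shutdown, which may succeed where the GUI-driven request was silently ignored:

        sc \\myserver query TermService          # confirm Terminal Services is still running
        shutdown -m \\myserver -r -f -t 0        # forced remote restart

    The -f flag forces hung applications to close, which is the usual reason a remote shutdown request is accepted but never acted on.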


  • OS X server large scale storage and backup

    - by user135217
    I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended. I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

    At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

    I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite. I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling. Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

    I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD with netatalk for serving files, with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks. I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.


  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket @google-code that describes a similar problem, and there is a solution that says (#39):

        So the server side answer is: don't send the 401 back early. Be as slow/dumb as 'hg serve'
        and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server, because solving it on the server side is preferable :/

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and its Diagnosis wiki page), though I can't find the config anywhere :/

        What happens next varies depending on your server. Some servers refuse the BODY, simply
        closing the pipe from the client and causing Mercurial to fail. Some, like Apache (at least
        the way I configure it, and that could be part of the problem) and nginx (the way
        BitBucket.org configures it), accept the BODY, though it may take a few retries. Bottom
        line: if Mercurial doesn't fail the push, it sends the changeset data at least once to a
        server that has already told it it lacks credentials (more on this at Blame). Assuming
        Mercurial is still running, it resends the "unbundle" request and data, this time with
        authentication. Finally, Apache accepts the data successfully. Nginx, OTOH, at least under
        BitBucket's configuration, seems to reassemble the previous body (the one that lacked
        authentication) and somehow keep Mercurial from re-sending the whole body.


  • compare the contents of two folders that are replicating by dfs

    - by Funky Si
    I have a large folder that I am replicating by DFS and I want to check that all files have been replicated correctly. Currently I am running the following script at both ends:

        cd e:\data\shared\
        dir /a:-h /b /s > e:\data\shared\result.txt

    and then using a text editor to tidy the file before using a diff tool to compare them. Does anyone know a better way of doing this? Failing that, does anyone know how to adapt my script to ignore all the files in the DfsrPrivate folders?
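    A small tweak to the existing script should handle the DfsrPrivate part: filter the listing through findstr before writing it out, as a sketch:

        cd /d e:\data\shared
        dir /a:-h /b /s | findstr /v /i "DfsrPrivate" > e:\data\shared\result.txt

    findstr /v drops every line containing the string (case-insensitively with /i), so the two result.txt files can then be compared directly with fc or a diff tool without hand-tidying.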


  • Security of BitLocker with no PIN from WinPE?

    - by Scott Bussinger
    Say you have a computer with the system drive encrypted by BitLocker and you're not using a PIN so the computer will boot up unattended. What happens if an attacker boots the system up into the Windows Preinstallation Environment? Will they have access to the encrypted drive? Does it change if you have a TPM vs. using only a USB startup key? What I'm trying to determine is whether the TPM / USB startup key is usable without booting from the original operating system. In other words, if you're using a USB startup key and the machine is rebooted normally then the data would still be protected unless an attacker was able to log in. But what if the hacker just boots the server into a Windows Preinstallation Environment with the USB startup key plugged in? Would they then have access to the data? Or would that require the recovery key? Ideally the recovery key would be required when booted like this, but I haven't seen this documented anywhere.


  • What tools can be used to monitor a web application? Beyond "doesn't 404"

    - by Freiheit
    I have an internal web application that has recently gone through a major version upgrade. I would like to monitor this application over the weekend and look for 'soft' errors. I will still need to spot-check things by hand, but there are some common failure patterns that I think I can automate. Examples include data with bad formatting, blank rows in tables (indicates missing non-critical data), patterns for identifiers ("TEST" means one of my devs left a testing feed on), etc. I think there are applications out there that can be scripted to do things like:

    1. Log in
    2. Go to $URL
    3. Select the 3rd link in $LIST or $PATTERN
    4. Check the HTML from that link for $PATTERNS
    5. Email a report

    Are these goals sane? What applications/tools can help with this?
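    Dedicated tools (Selenium, cron'd wget/curl scripts, monitoring suites with content checks) can all cover this; as a bare-bones illustration of the pattern, with the URL, credentials and patterns as pure placeholders:

        #!/bin/sh
        # log in, fetch a page, scan for known 'soft error' patterns, mail a report
        BASE="https://intranet.example.com"
        COOKIES=/tmp/monitor.cookies

        curl -s -c "$COOKIES" -d "user=monitor&pass=secret" "$BASE/login" > /dev/null
        curl -s -b "$COOKIES" "$BASE/reports/daily" > /tmp/page.html

        if grep -qE 'TEST|<td>[[:space:]]*</td>' /tmp/page.html; then
            mail -s "Soft errors found in daily report" ops@example.com < /tmp/page.html
        fi

    Anything that can drive HTTP and run regular expressions will do; the hard part is encoding the "blank row", "bad formatting" and "TEST identifier" checks as patterns worth alerting on.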

