Search Results

Search found 58447 results on 2338 pages for 'denormalized data'.

  • Skype keypad tones

    - by Don
    Hi. When I press a number on the Skype keypad (or use the number keys on the keyboard), no tone is emitted. This happens both when dialling a number and when I press a key during a call. This makes it impossible for me to use Skype with automated telephone systems that require you to use the keypad to enter data or choose between various options. I spoke to somebody who works in a call centre about this, and they indicated that it's possible to disable DTMF tones in Skype. I've looked through all the Skype options and can't find any way to enable/disable DTMF tones. If somebody knows how I can do this, or has another suggestion for fixing the problem, please let me know. I'm using version 4.2.0.152 of Skype. Thanks, Don

  • SAN typical MTBF

    - by Adrian K
    We're using a SAN on a project at work, and there's a bit of debate around the fact that technically it's a single point of failure. No one seems to have any hard data. The SAN in question is a single physical box, but with internal redundant components (sorry, not sure what level of RAID it has, but I can find out). What's the typical MTBF for a SAN? The PM has it down on the project's risk register as "quite common". I've never heard of a SAN going down, but I don't have any stats to show how likely it really is. Does anyone have any helpful info?

  • Need script to redirect STDIN & STDOUT to named pipes

    - by user54903
    I have an app that launches an authentication helper (my script) and uses STDIN/STDOUT to communicate. I want to redirect STDIN and STDOUT from this script to two named pipes for interaction with another program, e.g. SCRIPT_STDIN > pipe1 and SCRIPT_STDOUT < pipe2. Here is the flow I'm trying to accomplish:

    [Application] - launches the helper script, writes to the helper's STDIN, reads from the helper's STDOUT (example: STDIN: username,password; STDOUT: LOGIN_OK)
    [Helper Script] - reads STDIN (data from the app) and forwards it to PIPE1; reads from PIPE2 and writes that back to the app on STDOUT
    [Other Process] - reads from PIPE1, processes, and returns results to PIPE2

    The cat command can almost do what I want. If there were an option to copy STDIN to STDERR, I could do it with one command (assuming a fictitious option -e that echoes the input to STDERR rather than STDOUT):

        cat -e <PIPE2 2>PIPE1

    (read from PIPE2 and write it to STDOUT; the echoed copy of the input, which would normally go to STDERR, goes to PIPE1)
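
    A minimal sketch of such a helper, assuming the two pipes already exist (the paths are placeholders, and opening a FIFO blocks until the other end opens it too):

        #!/bin/bash
        # hedged sketch: bridge this script's STDIN/STDOUT to two named pipes
        # create the pipes beforehand with: mkfifo /tmp/pipe1 /tmp/pipe2

        # background copy: whatever the other process writes to pipe2 goes to our STDOUT
        cat /tmp/pipe2 &

        # foreground copy: everything the app writes to our STDIN goes into pipe1
        cat > /tmp/pipe1

        wait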

  • Importing GPG Key

    - by Bodo
    I have problems importing my GPG keys into my new Debian installation. I exported the private key a few years ago, and now I am trying to get everything running again under the new Debian. I tried:

        gpg --allow-secret-key-import --import private-key.asc

    But I only get this:

        gpg: Keine gültigen OpenPGP-Daten gefunden.
        gpg: Anzahl insgesamt bearbeiteter Schlüssel: 0

    which translates to:

        gpg: no valid OpenPGP data found
        gpg: total number of keys processed: 0

    The file looks correct: it starts with --BEGIN PGP PRIVATE KEY BLOCK----- Version: GnuPG v1.4.9 (GNU/Linux) and ends with -----END PGP PRIVATE KEY BLOCK-----. What could be wrong?
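
    A quick hedged sanity check, using the filename from the question (the first line deserves a close look, since a valid armor header must begin with exactly five dashes; LANG=C only forces gpg to print its messages in English):

        # hedged sketch: inspect the armor header, then retry the import
        head -n 1 private-key.asc   # expected: -----BEGIN PGP PRIVATE KEY BLOCK-----
        LANG=C gpg --import private-key.asc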

  • Turning a log file into a sort of circular buffer

    - by pachanga
    Folks, is there a *nix solution which would make a log file act as a circular buffer? For example, I'd like log files to store at most 1 GB of data and discard the older entries once the limit is reached. Is it possible at all? I believe that to achieve this, a log file would have to be turned into some sort of special device... P.S. I'm aware of the misc log rotation tools, but that is not what I need: log rotation requires lots of I/O and usually happens only once a day, while I need a "runtime" solution.
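
    There is no drop-in way to make a plain file circular, but one hedged approximation is to log through a named pipe and let a consumer cap the file (runit's svlogd and daemontools' multilog do size-capped logging natively, if adding a tool is acceptable). A sketch, with the paths and the trim policy as assumptions:

        #!/bin/bash
        # hedged sketch, not a true circular buffer: drain a FIFO into a log file
        # and trim the file back to half the limit whenever it grows past 1 GiB
        PIPE=/var/log/app.pipe
        LOG=/var/log/app.log
        LIMIT=$((1024 * 1024 * 1024))

        [ -p "$PIPE" ] || mkfifo "$PIPE"

        while IFS= read -r line; do
            printf '%s\n' "$line" >> "$LOG"
            if [ "$(stat -c %s "$LOG")" -gt "$LIMIT" ]; then
                # trimming still costs I/O, but only when the cap is hit
                tail -c $((LIMIT / 2)) "$LOG" > "$LOG.tmp" && mv "$LOG.tmp" "$LOG"
            fi
        done < "$PIPE"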

  • CentOS - massive usage on loopback interface

    - by Matthew Iselin
    Hi, I have a CentOS installation which is running fairly smoothly. Today I ran ifconfig, mainly to see what sort of usage has been coming across the Ethernet interface and to check my link speed. This is what I ended up seeing for the loopback device:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:10301085132061223274 errors:0 dropped:0 overruns:0 frame:0
              TX packets:13981054163812689233 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:11783901785008000095 (0.6 EiB)  TX bytes:10333501021200548281 (0.9 EiB)

    This just feels completely wrong - almost an EiB of data? Any assistance in tracking down the source of these statistics would be greatly appreciated.
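
    Counters that large usually point at counter corruption or an overflow in the reporting tool rather than real traffic. A hedged cross-check against the kernel's raw statistics:

        # hedged sketch: compare ifconfig's figures with the kernel's own counters
        grep 'lo:' /proc/net/dev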

  • Most secure way of connecting an intranet to an external server

    - by Eitan
    I have an internal server that hosts an ASP.NET intranet application. I want to keep it completely and utterly secure and private; however, we need to expose some information through a WCF service to another server, which hosts our external websites and CAN be accessed by the public. What is the best way to pass information between the two servers, in terms of IT setup, while keeping the in-house intranet server completely secure and inaccessible? I've heard VPN is the way to go, but I wanted to be sure it is the safest way. Another question: what would be the most secure way of passing data in the WCF service itself?

  • Trying to mount an NFS share on a Windows machine at startup with the Z: letter for all users

    - by ScottC
    Windows Server 2008. We are trying to mount an NFS share from a Unix machine on a specific drive letter on a Windows machine. We need the mount to be available to the server even if no users are logged in, and to every user who is logged in, under the Z: letter. If we run the command from the command prompt manually, it connects and we have access to the NFS share, and can open it and see and edit files:

        mount -o fileaccess=777 anon \\127.0.0.1\nav z:

    (IP address replaced with 127.0.0.1 for security reasons.) However, if we try to automate the task with a Task Scheduler entry that executes the batch script at boot time, it adds a drive to the list in 'My Computer', but it is disconnected, and trying to access the drive produces an error:

        Z: is not accessible
        The data area passed to a system call is too small.

    We tried as administrator with highest privileges, as SYSTEM, and as my (administrator-level) user, with the same results. Is there another way to do this? Most of the help I have found online suggests this way, but it keeps failing.
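
    One hedged angle: drives mapped at boot in the SYSTEM session live in a different session than interactive logons, so mapping per user at logon is often the more reliable shape. A sketch reusing the mount command from the question (the script path and task name are placeholders):

        REM mapnfs.bat - hedged sketch: map the share per user at logon rather than at boot
        mount -o fileaccess=777 anon \\127.0.0.1\nav Z:

        REM register the script to run at every logon (path and task name are placeholders)
        schtasks /create /tn "MapNFS" /tr "C:\scripts\mapnfs.bat" /sc onlogon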

  • Getting ZFS per dataset IO statistics (or NFS per export IO statistics)

    - by jkj
    Where do I find statistics about how I/O is divided between ZFS datasets? (zpool iostat only tells me how much I/O a pool as a whole is experiencing.) All the relevant datasets are used through NFS, so I'd be happy with per-export NFS I/O statistics as well. We're currently running OpenIndiana.

    [edit] It seems that operation and byte counters are available in kstat:

        kstat -p unix:*:vopstats_???????
        ...
        unix:0:vopstats_2d90002:nputpage 50
        unix:0:vopstats_2d90002:nread 12390785
        ...
        unix:0:vopstats_2d90002:read_bytes 22272845340
        unix:0:vopstats_2d90002:readdir_bytes 477996168
        ...

    ...but the strange hexadecimal ID numbers have to be resolved from /etc/mnttab (better ideas?):

        rpool/export/home/jkj /export/home/jkj zfs rw,...,dev=2d90002 1308471917

    Now writing a munin plugin to use the data...
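
    A hedged sketch of that resolution step, joining each vopstats ID against the dev= field in /etc/mnttab (the statistic name comes from the output above; the fixed-width pattern mirrors the question's):

        #!/bin/sh
        # hedged sketch: print read_bytes per mounted filesystem
        kstat -p 'unix:*:vopstats_???????:read_bytes' | while read -r key val; do
            id=${key#*vopstats_}      # e.g. 2d90002:read_bytes
            id=${id%%:*}              # e.g. 2d90002
            mnt=$(awk -v id="$id" 'index($4, "dev=" id) { print $2 }' /etc/mnttab)
            echo "${mnt:-unknown} read_bytes=$val"
        done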

  • Low-cost, Flexible Log Aggregation [closed]

    - by Dan McClain
    I'm starting to have quite the collection of Ubuntu VMs that I must manage. I'm starting to investigate Puppet for managing the configuration of all of them, and apticron to let me know what's out of date. But the issue I feel I should deal with sooner rather than later is log aggregation. I'd like to stay in the free/open source realm for now, seeing that we don't have much budget for something like Splunk yet. In addition to syslog, I would like to collect application-specific logs (we run different apps on different machines, from nginx+Passenger for Rails, to Apache+Tomcat for Java, to PHP for ExpressionEngine, plus MySQL/PostgreSQL database servers), so that we can analyze the relevant data. For now, I'm just looking to get all the logs in one place.
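
    As a hedged starting point before anything fancier: plain rsyslog can already centralize syslog traffic, and its imfile module can tail application log files that never touch syslog. A sketch with a placeholder hostname, in rsyslog's legacy directive syntax:

        # on each VM, e.g. /etc/rsyslog.d/forward.conf:
        # send all syslog messages to the central collector over TCP (@@ = TCP, @ = UDP)
        *.* @@loghost.example.com:514

        # on the collector, enable a TCP listener:
        $ModLoad imtcp
        $InputTCPServerRun 514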

  • Automatically save/download e-mail body to disk

    - by CatamountJack
    Is there a program that will allow me to connect to my mail server (IMAP) and automatically save certain new e-mails to disk? Multiple times a day I receive automated e-mail updates about pending jobs from a system that processes some information for us. The data in these e-mails is written as plain-text within the body of the message. I would like to download the newest message, parse it, and display it on my desktop. The last two parts I can manage ok - it's just the automatic downloading that is posing a challenge. I don't use Outlook (I do use Thunderbird), but would prefer not to have the client open to make this happen. I'm currently running Win7.
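
    One hedged option that needs no mail client running: curl 7.30 or newer can speak IMAP directly, which is easy to drive from a scheduled task. The server, credentials, and example UID below are all placeholders:

        REM hedged sketch: list unseen messages, then fetch one by UID for parsing
        curl -s -u "user:password" "imaps://imap.example.com/INBOX?UNSEEN"
        REM the line above returns something like "* SEARCH 97 98 100"; fetch one UID:
        curl -s -u "user:password" "imaps://imap.example.com/INBOX/;UID=100" -o latest.txt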

  • Best approach to utilize RamDisk for Chrome?

    - by laggingreflex
    I use a lot of tabs, and after a while the less recently opened tabs take some time to become responsive, which I guess is because they're being un-cached to the HDD as they're not required. So after creating a RAM disk I have two options: use the --disk-cache-dir="G:/" switch, or what I'm currently doing: using a directory junction for "[...]\AppData\Local\Google\Chrome\User Data\Default" to move that entire folder over to the RAM disk. I thought this would be better than just the disk cache, but what do I know. Is it? As one can guess, it'll be a pain saving/loading the RAM disk image each time I start Chrome, but if it really is better than the former approach I'll write a script or something.
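
    For comparison, a hedged sketch of both routes (G: is the RAM disk from the question; --disk-cache-size is a real Chrome switch, and the 1 GiB value is just an example):

        REM cache only on the RAM disk: nothing to save/restore, profile stays on HDD
        chrome.exe --disk-cache-dir="G:\ChromeCache" --disk-cache-size=1073741824

        REM junction route, as in the question (move the original Default folder first)
        mklink /J "%LOCALAPPDATA%\Google\Chrome\User Data\Default" "G:\Default"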

  • MSSQL 2008 License for both Web application and desktop application [closed]

    - by Angkor Wat
    I have an ASP.NET web application using MSSQL Express at the moment, but I want to move to MSSQL 2008, and I'm NOT sure what kind of license I should buy. I'm considering the Processor License according to this document, but I'm not sure it's the right choice. If I buy User CALs, should I buy only 1 CAL for my web application, or one for every visitor who visits my web site? I also have a Windows desktop application that writes/reads data from the server. Do I need a separate license for this Windows application if I buy a Processor License? Thank you for any suggestions.

  • Error during access to .ashx page in IIS 7 on Windows 2008

    - by Rodnower
    Hello, I have a web site on IIS 7 on Windows 2008. When I try to access the "page" (which is correctly bound in web.config), I get the error: "The required page cannot be accessed because the related configuration data for the page is invalid". It is important to note that on Windows 7 I have this working. Maybe this is because I don't see the ASP.NET area in the management window? If so, can you tell me how to install ASP.NET on IIS 7 on Windows 2008? Thanks in advance.
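
    If the ASP.NET role service really is missing, a hedged sketch of the standard Server 2008 commands (shown for .NET 2.0 on x64; adjust the framework path to match what the site targets):

        REM install the ASP.NET role service for IIS 7
        servermanagercmd -install Web-Asp-Net

        REM re-register ASP.NET with IIS without disturbing existing script maps
        %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -iru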

  • SFTP is not connecting to remote server

    - by Crono15
        $ sftp -vvv Remote_IP
        Connecting to Remote_IP...
        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to Remote_IP [Remote_IP] port 22.
        debug1: connect to address Remote_IP port 22: Operation timed out
        ssh: connect to host Remote_IP port 22: Operation timed out
        Connection closed

    I set up an account for SFTP-only access with a chroot and tested it on the server, where it works fine. The problem is that I cannot get remote SFTP access to the server to work; the output above is what I keep running into. I have been trying to solve this for 2 days now. I am not sure if it has to do with /etc/ssh/sshd_config. Is it something I am not aware of? I hope you can point me to the right place for this issue.
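
    Two hedged checks: the timeout happens before any SSH handshake, which usually implicates the network path or a firewall rather than sshd_config. From the client and on the server respectively (Linux syntax on the server side):

        # from the client: is port 22 reachable at all?
        nc -vz Remote_IP 22

        # on the server: is sshd actually listening on port 22?
        netstat -tln | grep ':22'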

  • Convert spanned dynamic disk back to basic: help needed

    - by Mouradb
    Hello all, here is my scenario: Windows 2008 Server on a VM, with two VM disks: Disk 1 - OS, basic; Disk 2 - data and an installed application, basic. During the weekend I was playing with this VM and wanted to add some space to Disk 2. I created a new disk (Disk 3), converted it to a dynamic volume, and added it to Disk 2 (which was also converted to a dynamic volume), and for some reason these are now spanned volumes. Like an idiot, I didn't take a snapshot before making the changes. My question: is there a way I can convert this back to basic? I don't want to delete and recreate the disk volumes because of the application installed on Disk 2. Any solution or tips I can use?

  • Boot from VHD with Windows 7 - bcdedit trouble

    - by Michiel Overeem
    I'm running Windows 7 Enterprise, x64 version. I created a Windows 7 VHD file with the help of the following blog post: hanselman blog. After that, I added it to my boot menu with the help of another blog post: hanselman blog. This worked great. I then upgraded my HDD, copied the old disk to the new disk with Clonezilla, copied the VHD to another partition, and updated the boot menu. However, the step

        C:\>bcdedit /set {guid} device vhd=[driveletter:]\<directory>\<vhd filename>

    fails with the message

        An error has occurred setting the element data.
        The request is not supported.

    What is happening?
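
    A hedged thing to try, following the same boot-from-VHD recipe: create a fresh entry instead of editing the old one, since the cloned BCD store's existing entry may carry stale device data. The description and VHD path below are placeholders:

        REM hedged sketch: recreate the VHD boot entry from scratch
        bcdedit /copy {current} /d "Windows 7 VHD"

        REM substitute the GUID printed by the copy command above
        bcdedit /set {guid} device vhd=[D:]\VHDs\win7.vhd
        bcdedit /set {guid} osdevice vhd=[D:]\VHDs\win7.vhd
        bcdedit /set {guid} detecthal on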

  • Best Linux filesystem for working with tens of thousands of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linuxes are prone to becoming unresponsive under heavy disk I/O (see the Gentoo forums: AMD64 system slow/unresponsive during disk access (Part 2)); unfortunately, I have such a system. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but which filesystem should I choose for it?

    Requirements:
    * for journaling, performance is preferred over safe data read/write operations
    * optimized for reading/writing tens of thousands of small files

    Candidates:
    * ext2 without any journaling
    * BtrFS

    In Phoronix tests, BtrFS demonstrated good random access performance (far better than XFS, and thereby may be less CPU-aggressive). However, unpacking seems to be faster with XFS there, yet in my own test unpacking a kernel tree to XFS made my system react 51% slower, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? Googling (q: reiserfs ext2 cpu) turned this up:

        1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ...

    Is it the same now?
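
    Whichever way the comparison lands, a hedged sketch of the ext2 option (the device name is a placeholder; the "small" usage type raises the inode count for trees of tiny files, and noatime avoids a metadata write on every read):

        # hedged sketch: dedicated ext2 partition for the portage trees
        mkfs.ext2 -T small /dev/sdb1
        mount -o noatime /dev/sdb1 /usr/portage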

  • Use pt-table-sync to setup a new MySQL DB

    - by Generation D Systems
    I have 2 hosts (A and B). B contains a MySQL server with a database called mydb, and A contains a MySQL server with nothing (fresh install). I want to replicate the entire mydb from B to A by running a script on A (I do not have shell access to B). Can I run this on A:

        pt-table-sync --execute h=b.mydomain.com,D=mydb h=a.mydomain.com

    I've read the docs but don't get a 100% comfort feeling (perhaps because of all the warnings about damaging your data if you don't know what you're doing). Will this work? Also, is h=a.mydomain.com necessary? (Will it route all traffic back in/out the local NIC?) Can I use localhost, or nothing at all?
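
    A hedged note and sketch: pt-table-sync compares and syncs rows between tables that already exist on both sides, so on a truly empty server the schema usually has to be created first (for example from a mysqldump of B). After that, previewing with --print before --execute, and addressing the local server as localhost, would look like:

        # preview the SQL that would run, without changing any data
        pt-table-sync --print h=b.mydomain.com,D=mydb h=localhost

        # then apply for real
        pt-table-sync --execute h=b.mydomain.com,D=mydb h=localhost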

  • FileZilla Server Configuration Problems

    - by LiamB
    I've set up FileZilla Server on a Windows 2008 machine. I created the user and password and added a shared folder, which I set as the home directory. I then connect to the server from the client computer:

        Status: Connecting to {IP}
        Status: Connection established, waiting for welcome message...
        Response: 220-Welcome To {NAME} FTP
        Response: 220 {DOMAIN}
        Command: USER {USER}
        Response: 331 Password required for {USER}
        Command: PASS *********
        Response: 230 Logged on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode ({}DATA)
        Command: MLSD

    The connection itself works fine; however, no remote directory listing is retrieved (it just shows "/"), and uploading any file fails. Any suggestions on how to debug this further?
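
    The log stops right after PASV/MLSD, which typically means the passive-mode data connection is being blocked. A hedged sketch for the Windows Firewall side (the port range is an example and must match the passive range configured in FileZilla Server's settings; a NAT router in front would need the same range forwarded):

        REM allow FileZilla Server's passive data ports through Windows Firewall
        netsh advfirewall firewall add rule name="FTP passive ports" dir=in action=allow protocol=TCP localport=50000-51000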

  • Cannot boot to HDD or Optical when Motherboard in AHCI mode

    - by Shevek
    I have an Abit AB9 QuadGT motherboard and am trying to swap over to AHCI mode. I have an existing Windows 7 installation which was installed under IDE mode. I have set the msahci registry setting to 0. When I try to boot in AHCI mode I get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER". I have tried booting with my Win 7 DVD in the optical drive. There is 1 SSD (System), 1 HDD (Data) and 2 optical drives connected via SATA If I switch back to IDE mode everything boots fine, either from the SSD or from a CD or DVD in the optical drive. Why can't I use AHCI mode?
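
    For reference, a hedged sketch of the registry change that has to be in place before flipping the BIOS to AHCI (this is the standard msahci fix; it is worth re-checking that the value actually stuck, and third-party controllers may additionally need their own AHCI driver):

        REM make the Microsoft AHCI driver start at boot (0 = boot start)
        reg add "HKLM\SYSTEM\CurrentControlSet\services\msahci" /v Start /t REG_DWORD /d 0 /f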

  • What effect does RAID stripe size have on read-ahead settings?

    - by stbrody
    I'm trying to figure out the correct read-ahead values to set on a RAID10 array, and I'm wondering if the RAID stripe size should factor into my considerations. I've heard conflicting information about this in the past. I once heard that you should always set your read-ahead value to a multiple of the RAID stripe size, and never below the stripe size, because that is the minimum amount of data the RAID controller will ever try to read at once. Someone else told me, however, that setting read-ahead below the stripe size is fine, and can, in fact, increase the amount of parallel reads you can do across devices in the array, increasing performance and decreasing load on the array. So which is it? Do read-ahead settings that aren't multiples of the stripe size make sense or not?
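
    Either claim is easy to test empirically. A hedged sketch with blockdev (the device name is a placeholder, and --setra counts 512-byte sectors, so 4096 means 2 MiB of read-ahead):

        # inspect the current read-ahead, set a candidate value, then benchmark
        blockdev --getra /dev/md0
        blockdev --setra 4096 /dev/md0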

  • Good experiences with bulk rate SMS providers?

    - by jen_h
    We're a pretty popular service; our users are currently sending 100,000+ SMS messages per month (projected 180k this month, and continuing to grow). We're currently using a primary domestic provider that doesn't offer bulk rates or short code access. We're using a few backup providers as well for maximum redundancy, but aren't thrilled by 'em. We're ideally looking for a service that provides good bulk rates/incentives, good uptime/redundancy/reputation, and easy API integration (including respectable error codes!) ;). Right now we're looking primarily for a domestic US SMS solution, but aren't averse to using the same provider for both international and US. For those of you using bulk SMS right now - what are your recommendations, experiences, etc. in the bulk SMS domain? It sounds like I'm looking for a golden unicorn here, I know, but any data/recommendations/warnings you've got are helpful!

  • MySQL installation question.

    - by srtriage
    I am far from a DBA and have a question. Recently I installed MySQL. On my machine, C:\ is a 50 GB partition on two mirrored 10k SAS drives; the remaining space on those drives is allocated to D:. I also have an SSD mounted as E:. When I installed MySQL, I installed it to E:\, assuming that is where the database files would be kept since that is where I installed it. I am now seeing C:\ProgramData\MySQL\MySQL Server 5.1\data\peq (peq being the name of my main database). Is my database being stored on C:\, and if so, how do I fix it so the DB is stored on the SSD?
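
    The usual fix, as a hedged sketch: the data location is governed by the datadir setting in my.ini, not by where the binaries were installed. With the service stopped, copy the data folder to the SSD, update my.ini, and restart (E:\MySQLData and the service name MySQL are assumptions; check the real service name in services.msc):

        REM hedged sketch: relocate the MySQL data directory to the SSD
        net stop MySQL
        xcopy /E /I "C:\ProgramData\MySQL\MySQL Server 5.1\data" "E:\MySQLData"
        REM then edit my.ini so that:  datadir="E:/MySQLData"
        net start MySQL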
