Search Results

Search found 67075 results on 2683 pages for 'data model'.


  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that were recently set up with a regular internet connection after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to connect to the network. So that they can print and scan from the local multi-function device, we have set up a split-tunnel VPN. We currently have about 15-20 users on this setup around the country without any problems.

    Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. Three other users in the same office do not have this problem. I assumed it was something with the computer and replaced it with another of the same model. The replacement worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it might be a NIC driver issue, I sent her another computer, this time a different model; the same problem occurred.

    If I update the hosts file to point to the correct paths, things work, and if I connect via a normal VPN connection everything works, but then the user cannot scan or print - which is a problem. I have tried to find ways to create another tunnel on a normal VPN, and ways to force the correct tunnel on the split-tunnel VPN. It appears to be something related to the ISP: if I connect via Comcast or Verizon everything is fine, but once she connects through Insite she has problems. I have been unable to get any support from Insite, as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
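
    (For reference, the hosts-file workaround described above looks roughly like the following; the hostname and address are placeholders, not values from this network.)

        # Windows: C:\Windows\System32\drivers\etc\hosts   (Linux: /etc/hosts)
        # Force an internal hostname to its internal address so the traffic
        # is routed down the VPN tunnel instead of out the ISP's resolver:
        10.0.0.25    intranet.example.internal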

    Read the article

  • Win 7 (SP1) Creation of System Image failure... seems absurd. (0x80780119)

    - by DSKauai16
    This question seems to keep coming up in Microsoft Answers. Unfortunately, it never seems to get solved... on Microsoft Answers. Hopefully someone here will have a better idea.

    I'm trying to create a system image, and it's not working. I reinstalled Windows just a few hours ago. I have an admittedly older Western Digital 148 GB USB portable HDD which is not just completely empty - I even did a long format with the NTFS file system just to be sure. The reinstallation of Windows and a couple of programs takes up 40 GB. When I try to create a System Image onto the USB HDD, I am told that there is not enough room. Obviously, this is wrong. The steps lead exactly to error 0x80780119.

    I haven't done anything but put on Windows 7 (32-bit, SP1) and install some programs; I haven't even done anything with them. There's almost zero in the way of data, and I checked that the HDD can accept, send, and work with data. I've used the drive successfully for a long, long time. What's going wrong?
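
    (If it helps to take the wizard out of the equation, the same backup can be attempted from an elevated command prompt, which sometimes reports a more specific error than the GUI; the drive letters below are assumptions, not values from the question.)

        REM Assumed drive letters: C: = system drive, E: = the USB HDD.
        wbadmin start backup -backupTarget:E: -allCritical -quiet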

    Read the article

  • Memory upgrade to 8GB on unibody MacBook

    - by dlamblin
    I bought the 13" unibody MacBook one month prior to its upgrade into the new, improved 13" unibody MacBook Pro (with non-user-openable battery compartment, three extra hours of battery life, wider color gamut, SD slot, FireWire 800 port, and a new 8GB memory limit). My MacBook says it is limited to 4GB of DDR3 1066MHz memory in 2 SO-DIMMs, but that was written back when you couldn't get 4GB SO-DIMMs at all. Now that you can, the very similar MacBook Pro ships with 4GB standard and can be upgraded to 8GB.

    I've asked Apple reps, and I'm repeatedly told that my model cannot be upgraded to 8GB. When I ask for the reason they always say: "Because that's the published limit at the time your Mac was built." I find this unconvincing. If they said, "Because the memory controller in the chipset is limited to 4GB, despite being seemingly identical to the memory controller in the newest MacBook model with the same chipset," then I'd just take their word for it. Has anyone tried it, or found any research as to whether two 4GB DDR3 1066MHz SO-DIMMs can be installed in the FireWire-less unibody MacBook?

    Read the article

  • Linux (DUP!) ping packets

    - by Darkmage
    I can't seem to figure out what is going on here. The Linux machine I am using is running as a VM on a Win7 machine, using VirtualBox running as a service. If I ping the Win7 host I get OK results:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.100
        PING 192.168.1.100 (192.168.1.100) 10(38) bytes of data.
        18 bytes from 192.168.1.100: icmp_seq=1 ttl=128 time=1.78 ms
        18 bytes from 192.168.1.100: icmp_seq=2 ttl=128 time=1.68 ms

    If I ping localhost I'm OK too:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 localhost
        PING localhost (127.0.0.1) 10(38) bytes of data.
        18 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.255 ms
        18 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.221 ms

    But if I ping the gateway I get DUP packets:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 10(38) bytes of data.
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.27 ms
        18 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.46 ms (DUP!)
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.1 ms
        18 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=22.4 ms (DUP!)

    If I ping another machine on the same LAN I still get DUPs. Pinging remote hosts also gives (DUP!) results:

        root@Virtual-Box:/home/glennwiz# ping -c 100000 -s 10 -i 0.02 www.vg.no
        PING www.vg.no (195.88.55.16) 10(38) bytes of data.
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.0 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=1 ttl=245 time=10.3 ms (DUP!)
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.3 ms
        18 bytes from www.vg.no (195.88.55.16): icmp_seq=2 ttl=245 time=10.6 ms (DUP!)
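
    (One way to narrow down whether the duplicates actually arrive on the wire or are generated inside the guest's own stack is to watch the ICMP traffic directly; a minimal sketch, assuming the guest's interface is eth0.)

        # Run this while the ping above is going; an echo reply with the same
        # icmp_seq will appear twice here only if it really hits the wire twice.
        tcpdump -n -i eth0 'icmp and host 192.168.1.1'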

    Read the article

  • Gwibber on Ubuntu doesn't open

    - by Radian
    Gwibber doesn't open. When I try to open it from the command line I get this error:

        ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowState' as enum when in fact it is of type 'GFlags'
        ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowActions' as enum when in fact it is of type 'GFlags'
        ** (gwibber:3752): WARNING **: Trying to register gtype 'WnckWindowMoveResizeMask' as enum when in fact it is of type 'GFlags'
        No dbus monitor yet
        Updating...
        ERROR:dbus.proxies:Introspect error on com.Gwibber.Service:/com/gwibber/Service: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)
        Traceback (most recent call last):
          File "/usr/bin/gwibber", line 67, in <module>
            client.Client()
          File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 447, in __init__
            self.w = GwibberClient()
          File "/usr/lib/python2.6/dist-packages/gwibber/client.py", line 29, in __init__
            self.model = gwui.Model()
          File "/usr/lib/python2.6/dist-packages/gwibber/gwui.py", line 43, in __init__
            self.services = json.loads(self.daemon.GetServices())
          File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 68, in __call__
            return self._proxy_method(*args, **keywords)
          File "/usr/lib/pymodules/python2.6/dbus/proxies.py", line 140, in __call__
            **keywords)
          File "/usr/lib/pymodules/python2.6/dbus/connection.py", line 620, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)

    I tried removing it and installing it again, but I get the same error. It was working properly, then suddenly it didn't.

    Read the article

  • Postfix Relay to Office365

    - by woodsbw
    I am trying to set up a Postfix server on a Linux box to relay all mail to our Office365 (Exchange, hosted by Microsoft) mail server, but I keep getting an error regarding the sending address:

        BB338140DC1: to= relay=pod51010.outlook.com[157.56.234.118]:587, delay=7.6, delays=0.01/0/2.5/5.1, dsn=5.7.1, status=bounced (host pod51010.outlook.com[157.56.234.118] said: 550 5.7.1 Client does not have permissions to send as this sender (in reply to end of DATA command))

    Office 365 requires that the sending address in the MAIL FROM and the From: header be the same as the address used to authenticate. I have tried everything I can think of in the config to get this working. My postconf -n:

        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        debug_peer_list = 127.0.0.1
        inet_interfaces = loopback-only
        inet_protocols = all
        mailbox_size_limit = 0
        mydestination = xxxxx, localhost.localdomain, localhost
        myhostname = localhost
        mynetworks = 127.0.0.0/8
        recipient_delimiter = +
        relay_domains = our.domain
        relayhost = [pod51010.outlook.com]:587
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        smtp_always_send_ehlo = yes
        smtp_sasl_auth_enable = yes
        smtp_sasl_mechanism_filter = login
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_tls_loglevel = 1
        smtp_tls_security_level = may
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    sender_canonical:

        www-data             [email protected]
        root                 [email protected]
        www-data@localhost   [email protected]
        root@localhost       [email protected]

    Also, sasl_passwd is set to the correct credentials (tested them using swaks multiple times). Authentication works, and the message is sent when the From: headers are correct (also tested using swaks). The emails come from PHP, so I have also tried altering the sendmail path in php.ini to pass the correct from address via -f. So, for some reason, mail coming from www-data and root is not having the From: fields rewritten to Office 365's satisfaction, and it won't send the message. Are there any Postfix gurus out there who can help me set up this relay?
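
    (One detail worth noting: with sender_canonical_classes = envelope_sender, only the envelope MAIL FROM is rewritten - the From: header that PHP generates is left untouched, which matches the symptom above. A hedged sketch of one possible approach, assuming Postfix 2.5+ for smtp_header_checks; the address is a placeholder for whatever account authenticates to Office 365.)

        # /etc/postfix/header_checks -- placeholder address:
        /^From:.*/    REPLACE From: [email protected]

        # main.cf: regexp tables do not need postmap
        postconf -e 'smtp_header_checks = regexp:/etc/postfix/header_checks'
        postfix reload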

    Read the article

  • CSV engine on MySQL server

    - by Jeff
    I don't think this is a programming question, so I am going to ask it here. Reading the book High Performance MySQL, I read about the CSV engine. The paragraph says:

        The CSV engine can treat comma-separated values (CSV) files as tables, but it does not support indexes on them. This engine lets you copy files in and out of the database while the server is running. If you export a CSV file from a spreadsheet and save it in the MySQL server's data directory, the server can read it immediately. Similarly, if you write data to a CSV table, an external program can read it right away. CSV tables are especially useful as a data interchange format and for certain kinds of logging.

    What I get from this paragraph is that I can copy a .csv file into the data directory of the database, and it should show up as a table that can be read from. However, whenever I copy a test .csv file into the directory, it does not appear as a table and I can't access it. I am using MySQL 5.5. Does anyone know why this is not working, or what I am doing wrong? Thanks
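
    (For what it's worth, the book's wording presupposes that a CSV table definition already exists: the server only treats tablename.CSV as a table once a matching CREATE TABLE has produced the table's metadata. A minimal sketch with placeholder names and a typical Linux data-directory path; note that CSV-engine columns must be declared NOT NULL.)

        mysql -u root -p test -e \
          "CREATE TABLE csv_test (id INT NOT NULL, name VARCHAR(20) NOT NULL) ENGINE=CSV;"
        # Overwrite the (empty) data file the server just created, then make
        # the server reread it:
        sudo cp /tmp/mydata.csv /var/lib/mysql/test/csv_test.CSV
        mysql -u root -p test -e "FLUSH TABLE test.csv_test; SELECT * FROM test.csv_test LIMIT 5;"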

    Read the article

  • Is there any limit to AIX 5.3 pipe size?

    - by snowflake
    Hello, I'm having trouble performing cat/tail/head operations on large files on AIX 5.3. When concatenating several 1 GB files into another file:

        cat file1 file2 file3 > outputfile

    the output file is limited to 2 GB ("cat: output error", and the result file is 2147483647 bytes). The filesystem is JFS2, and I successfully uploaded 10 GB files onto the same filesystem through FTP without problems. I found nothing relevant in /etc/security/limits:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    ulimit -a:

        core file size (blocks)      unlimited
        data seg size (kbytes)       245759
        file size (blocks)           unlimited
        max memory size (kbytes)     unlimited
        open files                   2000
        pipe size (512 bytes)        64
        stack size (kbytes)          32768
        cpu time (seconds)           unlimited
        max user processes           2048
        virtual memory (kbytes)      278527

    The problem does not occur on another AIX 5.3 server; I'm just looking for a configuration difference that might be the source of the problem. /etc/security/limits on the server without the problem:

        default:
            fsize = -1
            core = 2097151
            cpu = -1
            data = 262144
            rss = 65536
            stack = 65536
            nofiles = 20000

    ulimit -a on the server without the problem:

        core file size (blocks, -c)  1048575
        data seg size (kbytes, -d)   131072
        file size (blocks, -f)       unlimited
        max memory size (kbytes, -m) 32768
        open files (-n)              20000
        pipe size (512 bytes, -p)    64
        stack size (kbytes, -s)      32768
        cpu time (seconds, -t)       unlimited
        max user processes (-u)      262144
        virtual memory (kbytes, -v)  unlimited
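
    (Since ulimit already reports an unlimited file size, a quick way to separate a shell/utility limit from a filesystem limit is to write past the 2 GB boundary with a different tool; the path below is a placeholder.)

        # Write 3 GB of zeros onto the same JFS2 filesystem; if this succeeds
        # where cat fails, the limit is in the utility/session, not the filesystem.
        dd if=/dev/zero of=/path/on/jfs2/bigfile bs=1048576 count=3072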

    Read the article

  • Windows Bluescreen - atikmpag.sys

    - by Mochan
    Information:

        Name: atikmpag.sys bluescreen (BSOD, or Blue Screen of Death)
        Error code: 0x00000116
        Appears when: playing games, watching videos
        Can be reproduced: yes
        Suspected cause: the graphics card is the main assumption

    System specifications (before we begin, I will inform you of my setup):

        OS: Windows 7 x64 Home Edition
        Model: Dell Inspiron 15R Special Edition (aka Inspiron 7520) (add 2GB of RAM to the model linked)
        Hard drive: 1TB
        CPU: Intel quad-core i7, Sandy Bridge (I think), at 2.10GHz (I think it can be clocked to 3GHz?)
        RAM: 6GB (I think 1 x 4GB and 1 x 2GB)
        Display: 15.6" HD (1366x768)
        Graphics: AMD Radeon HD 7500M 2GB

    So now that you know some basics about my computer, I'll get to the problem. Being an Ubuntu user I hardly use Windows, but occasionally I do - to run Skyrim and other games incompatible with Linux and WINE; the new Sims 3 Seasons patch is also not supported. The bluescreen happens when playing these two games and, theoretically, other ones. I have also heard of it happening to others while watching HD movies and video series. Watching the bluescreen as it happens, I see it is the 'atikmpag.sys' error.

    I have not installed much, and nothing significant: I think I have downloaded Skyrim, Firefox, and The Sims 3, and I haven't done much more... since Ubuntu is definitely the best in comparison! (No hate, just a joke :P). I can reproduce it easily (just by running a game for less than a minute). It happens every time, but never at a specific moment. So far I have found that it may be caused by lack of power to the graphics card, or the card may be damaged or fried. I've had the computer for a mere 4 months (and have had other problems with it also). I have contacted Dell but they are useless beyond belief. Anyone with any information, solutions or details is encouraged to share - it would be immensely appreciated.

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregularly changed/updated) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS with PostgreSQL 8.4, is used for scientific studies, exploration, etc., and is running quite well. These days an upgrade of the DB storage hard disks is arriving - all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks.

    It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems a much better idea to move the WAL files (the pg_xlog directory) to their own partition (see http://www.postgresql.org/docs/8.4/static/wal-internals.html). What are your opinions? Are there any tablespace- or partitioning-related performance documentations / benchmarks?
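
    (For concreteness, the two candidate layouts look roughly like this; the mount points and names below are placeholders, and the WAL move assumes the server is stopped first.)

        # Move pg_xlog to its own spindle and symlink it back (8.4-era layout):
        service postgresql stop
        mv /var/lib/pgsql/data/pg_xlog /mnt/newdisk/pg_xlog
        ln -s /mnt/newdisk/pg_xlog /var/lib/pgsql/data/pg_xlog
        service postgresql start

        # Or carve out a tablespace for the hot core-model objects:
        psql -U postgres -c "CREATE TABLESPACE core_ts LOCATION '/mnt/newdisk/core_ts';"
        psql -U postgres -c "ALTER TABLE core_schema.big_table SET TABLESPACE core_ts;"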

    Read the article

  • MySQL NDB cluster - node restart

    - by Arafat
    Hi guys! I just set up a MySQL cluster on a fairly decent baby (IBM x3650 M3) with 24GB memory, a 6-core Xeon, and SAS 6Gbps HDDs, running Debian Lenny (5), 64-bit. The NDB version is 7.1.9a. Our database size on MyISAM is around 3.2 GB; the ndb_size estimation is 58GB for the NDB engine. A little info about my database: 150 common tables for global purposes, plus 130 tables for each client. So it goes like this: 130 x 115 (clients) = 14950 tables. Is it normal or usual to have 14000 tables in one database? The reasons we did this were easy maintenance and per-client customization.

    Now, the problem: an NDB cluster can only support 20320 tables, but it can support 5,000,000,000 rows in one table if I'm not wrong. My real headache is that my cluster data node takes less than two minutes to start up without any data. But as soon as I convert my tables to NDB - and so far only 2000 tables - the data node takes at least 30 to 40 minutes to start up. Is that normal? If I convert all my tables to NDB, will it take even longer? Or, say I consolidate the 14000 per-client tables' data into one shared set, which would be 130 tables - will that help? Or is there anything idiotically wrong in what I'm doing? I'll attach my config.ini file soon; here's a simple overview of my config:

        DataMemory = 14G
        IndexMemory = 3GB
        MaxNoOfTables = 14000
        MaxNoOfAttributes = 78000

    I'm just testing these values with 2000 tables first. Please advise how to increase the startup speed, and please point out where I'm going wrong. Thanks in advance, guys!

    Read the article

  • lm-sensors - always returns 32 degrees (Celsius) for temperature

    - by mopoke
    On my VIA EPIA motherboard (using the VIA VT8231 ISA bridge), I get strange output from the lm-sensors temperature reading: it always returns 32 degrees Celsius. I previously had correct output for temperature (my Munin graphs show temperatures typically in the range of 50 to 60 degrees). I've tried uninstalling (and purging) the lm-sensors package, have re-run sensors-detect a number of times, and rebooted, but nothing changes the output. I am running Ubuntu Karmic Koala (9.10). Anyone got any bright ideas on what I might have missed?

    uname -a:

        Linux george 2.6.31-16-386 #53-Ubuntu SMP Tue Dec 8 06:39:34 UTC 2009 i686 GNU/Linux

    cpuinfo:

        processor       : 0
        vendor_id       : CentaurHauls
        cpu family      : 6
        model           : 7
        model name      : VIA Samuel 2
        stepping        : 3
        cpu MHz         : 399.000
        cache size      : 64 KB
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 1
        wp              : yes
        flags           : fpu de tsc msr cx8 mtrr pge mmx 3dnow up
        bogomips        : 800.04
        clflush size    : 32
        power management:

    lspci:

        00:00.0 Host bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia] (rev 05)
        00:01.0 PCI bridge: VIA Technologies, Inc. VT8601 [Apollo ProMedia AGP]
        00:11.0 ISA bridge: VIA Technologies, Inc. VT8231 [PCI-to-ISA Bridge] (rev 10)
        00:11.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
        00:11.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e)
        00:11.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 1e)
        00:11.4 Bridge: VIA Technologies, Inc. VT8235 ACPI (rev 10)
        00:11.5 Multimedia audio controller: VIA Technologies, Inc. VT82C686 AC97 Audio Controller (rev 40)
        00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 51)
        01:00.0 VGA compatible controller: Trident Microsystems CyberBlade/i1 (rev 6a)

    sensors:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +32.0°C  (crit = +60.0°C)

    Read the article

  • Ubuntu lucid, maverick high iowait

    - by netom
    I'm using Ubuntu, and I've got the same problem with Lucid and Maverick. From time to time, especially a few minutes after boot, iowait goes to 50-100% and the box is unusable: everything that tries to access the disk freezes. I have the following setup.

    Hard disk:

        Model Family:     Western Digital Caviar Green family
        Device Model:     WDC WD15EADS-00P8B0
        Serial Number:    WD-WMAVU0391287
        Firmware Version: 01.00A01
        User Capacity:    1,500,301,910,016 bytes

    I have a quad-core Intel Core2 Q6600 processor and 4G of memory. When the high iowait occurs, usually 4 processes are active: kdmflush (two processes), jbd2/dm-0-8, and jbd2/dm-1-8, plus a few more starving user processes, of course. I know this from top and iotop. Any suggestions about why this is happening? There are a lot of Q&As about Linux and high iowait, but none of them has helped so far. I even tweaked the hard disk not to park its head every 8 seconds (the Load Cycle Count is 50334!!!!! :o), but nothing - the problem persists. Thank you in advance.

    Read the article

  • Nginx + PHP5-FPM repeated 502 cut-outs

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am seeing this across 3 1GB machines, each with the same configuration and the same problem.

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:

        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf

        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes

        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
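
    (One low-risk diagnostic, using settings that are already present but commented out in the pool config above: enable the FPM slow log so any stalled request identifies itself; the 5-second threshold is an arbitrary choice.)

        ; /etc/php5/fpm/pool.d/www.conf -- log any request stuck longer than 5s
        request_slowlog_timeout = 5
        slowlog = /var/log/php-fpm.log.slow

    Then restart php5-fpm and watch the slow log while reproducing a 502.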

    Read the article

  • How can I run Gnome or KDE locally in Cygwin?

    - by John Peter Thompson Garcés
    Apparently it is possible to do this using Cygwin Ports, as can be seen in screenshots. I followed this how-to to get apt-cyg and the Ports repository set up, and I used it to install gnome-session. The how-to supposedly gives the commands needed to run Gnome or KDE, but whenever I try to run Gnome, a blank X window pops up and then quickly disappears. Here is the terminal output:

        $ startx /usr/bin/dbus-launch gnome-session
        xauth: file /home/jpthomps/.serverauth.4168 does not exist

        Welcome to the XWin X Server
        Vendor: The Cygwin/X Project
        Release: 1.10.3.0
        OS: Windows 7 Service Pack 1 [Windows NT 6.1 build 7601] (WoW64)
        Package: version 1.10.3-12 built 2011-08-22

        XWin was started with the following command line:
        /usr/bin/X :0 -auth /home/jpthomps/.serverauth.4168

        (II) xorg.conf is not supported
        (II) See http://x.cygwin.com/docs/faq/cygwin-x-faq.html for more information
        LoadPreferences: /home/jpthomps/.XWinrc not found
        LoadPreferences: Loading /etc/X11/system.XWinrc
        LoadPreferences: Done parsing the configuration file...
        winDetectSupportedEngines - DirectDraw installed, allowing ShadowDD
        winDetectSupportedEngines - Windows NT, allowing PrimaryDD
        winDetectSupportedEngines - DirectDraw4 installed, allowing ShadowDDNL
        winDetectSupportedEngines - Returning, supported engines 0000001f
        winSetEngine - Using Shadow DirectDraw NonLocking
        winScreenInit - Using Windows display depth of 32 bits per pixel
        winFinishScreenInitFB - Masks: 00ff0000 0000ff00 000000ff
        Screen 0 added at virtual desktop coordinate (0,0).
        MIT-SHM extension disabled due to lack of kernel support
        XFree86-Bigfont extension local-client optimization disabled due to lack of shared memory support in the kernel
        (II) AIGLX: Loaded and initialized /usr/lib/dri/swrast_dri.so
        (II) GLX: Initialized DRISWRAST GL provider for screen 0
        winPointerWarpCursor - Discarding first warp: 637 478
        (--) 5 mouse buttons found
        (--) Setting autorepeat to delay=500, rate=31
        (--) Windows keyboard layout: "00000409" (00000409) "US", type 4
        (--) Found matching XKB configuration "English (USA)"
        (--) Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        Rules = "base" Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        winBlockHandler - pthread_mutex_unlock()
        winProcEstablishConnection - winInitClipboard returned.
        winClipboardProc - DISPLAY=:0.0
        winClipboardProc - XOpenDisplay () returned and successfully opened the display.
        xinit: XFree86_VT property unexpectedly has 0 items instead of 1
        xinit: connection to X server lost

        waiting for X server to shut down
        winClipboardProc - winClipboardFlushWindowsMessageQueue trapped WM_QUIT message, exiting main loop.
        winClipboardProc - XDestroyWindow succeeded.
        winClipboardProc - Clipboard disabled - Exit from server
        winDeinitMultiWindowWM - Noting shutdown in progress

    Read the article

  • Repair partition table

    - by m.sr
    Hello. I've just overwritten the partition table of my system's hard disk: I ran cfdisk on the wrong device (/dev/sda instead of /dev/sdd), deleted all partitions, made one new primary partition spanning the whole device, set its type to 07 (NTFS), and hit write. So here I am with my system still running. Until I reboot, I hope/guess nothing will change - meaning all my data is accessible. (I'm currently making a dd backup of the whole device and plan to make a .tar.gz backup of the most important data later.) I also backed up /proc/partitions, /proc/diskstats (even though I guess this is more about throughput and the like), and /sys/block/sda/sda?/{start,size}.

    Some further things I know:

        - 4 primary partitions
        - 1st partition: ~100MB, ext3, /boot
        - 2nd partition: ~100MB, "Win7 boot partition", ntfs(?)
        - 3rd partition: ~20...30GB, Win7, ntfs
        - 4th partition: ~20...30GB, LUKS-encrypted device

    The LUKS-decrypted device is an LVM PV, and the /, /home and swap partitions are all LVs on the (VG on the) above-noted PV.

    So my questions:

        - What is the simplest way to just write the kernel's partition table back to the disk?
        - What is the simplest way to take the above-mentioned data (and perhaps other data I don't know of) and regenerate the partition table?
        - Are there any problems to take care of regarding LUKS and/or LVM?
        - Is there any data I should back up before rebooting (meaning stuff from the kernel [/sys/..., /proc/...] and so on, which could help me regenerate the partition table)?

    Thanks a lot!

    P.S.: Debian sid, kernel 2.6.34-1-amd64 from debian-experimental, 80GB Intel SSD
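
    (A sketch of the direction most recovery guides take, using only the information the kernel still holds; treat it as an outline rather than a tested recipe - the counts are 512-byte sectors, and the dump needs hand-editing into sfdisk's input format before anything is written back.)

        # Record the kernel's in-memory view of each partition:
        for p in /sys/block/sda/sda?; do
          echo "/dev/${p##*/} : start=$(cat $p/start), size=$(cat $p/size)"
        done > kernel-partitions.txt
        # After adding the correct Id= fields by hand (83, 7, 8e, ...):
        #   sfdisk --no-reread --force /dev/sda < kernel-partitions.txt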

    Read the article

  • Excel 2007: Named range problems when linking workbooks

    - by Mike
    I have 30+ workbooks, each with 5 specific worksheets (formatted the same). Each worksheet's data needs to be linked to a master workbook, so that I end up with 5 master workbooks and all the specific data in one long table format, $A$2:$I$750. (Are you still with me? ;)) I don't have access to a database, so I'm having to link the sheets to their master workbook directly. I've highlighted the data I need, named the range, and then tried referencing this from my master workbook.

    I get the #VALUE! error symbol when I try to link (=[WorkbookName]!MyNamedRange) from a cell that doesn't match the top-left cell of my range. Example: MyNamedRange is always =$A$2:$I$43 on one specific sheet. On my master workbook the link works if it's entered at A2, but I get #VALUE! if it's entered at A1 or A44. Any ideas? I'm trying to link my data into one continuous table so I can run a pivot on it, among other things. Can it be done like this, or should I just copy and paste? I'm trying to keep things 'linked' so I don't need to spend time C&Ping all day. Many thanks, Mike.

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS / apps & data / bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID-1 configuration might be (or whether Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID-1 are rather different from RAID-0 or RAID-5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID-1. For example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID-1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in flight, as long as there's no data corruption.

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring - not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.

    Read the article

  • 2008 Server randomly reboots

    - by Jeff
    I'm out of ideas here. We have a 2008 server that keeps rebooting 2-3 times a day at completely random times with an "Unexpected Shutdown" event. There are no dumps and no events leading up to it - just as if it loses power and then comes back online. I ran a diagnostic on the power supply and it has had continuous power for months. In addition, the temperature of the processors maxes out at 40 degrees Celsius. Anyone have any ideas how to figure out why this is restarting all the time? This is a DMZed web server, so it doesn't do too much process-wise. Here are the specs:

        Host Name:                 ~~~
        OS Name:                   Microsoft Windows Server 2008 R2 Standard
        OS Version:                6.1.7600 N/A Build 7600
        OS Manufacturer:           Microsoft Corporation
        OS Configuration:          Standalone Server
        OS Build Type:             Multiprocessor Free
        Registered Owner:          Windows User
        Registered Organization:
        Product ID:                ~~~
        Original Install Date:     5/27/2010, 4:25:47 PM
        System Boot Time:          2/14/2011, 5:35:01 PM
        System Manufacturer:       HP
        System Model:              ProLiant DL380 G6
        System Type:               x64-based PC
        Processor(s):              1 Processor(s) Installed.
                                   [01]: Intel64 Family 6 Model 26 Stepping 5 GenuineIntel ~1586 Mhz
        BIOS Version:              HP P62, 8/16/2010
        Windows Directory:         C:\Windows
        System Directory:          C:\Windows\system32
        Boot Device:               \Device\HarddiskVolume1
        System Locale:             en-us;English (United States)
        Input Locale:              en-us;English (United States)
        Time Zone:                 (UTC-05:00) Eastern Time (US & Canada)
        Total Physical Memory:     4,086 MB
        Available Physical Memory: 2,775 MB
        Virtual Memory: Max Size:  8,170 MB
        Virtual Memory: Available: 6,691 MB
        Virtual Memory: In Use:    1,479 MB
        Page File Location(s):     C:\pagefile.sys
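
    (One place to start, since nothing shows up in the obvious logs: Kernel-Power event 41 entries record every unexpected shutdown and sometimes carry a bugcheck code; a sketch using the built-in event utility.)

        REM Last 5 unexpected-shutdown records from the System log, newest first:
        wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and (EventID=41)]]" /f:text /c:5 /rd:true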

    Read the article

  • Unable to write DVD-R (blank DVDs)

    - by FrozenKing
    I have a problem with my DVD drive: it can read CDs/DVDs and can write CDs and all CD/DVD-RW media, but it cannot write DVDs. The drive model is a Samsung SH-S203B, and I have a log file created by Nero Burning ROM 11. Actually, the fact is that no blank DVDs are recognized in my drive - only previously written DVDs can be read! Is this a problem with the OS, should I try cleaning the drive, or is my 4-year-old drive about to fail, since it is showing these symptoms?

        OS = WinXP
        AV = KIS 2012
        DVD Drive = Samsung SH-S203B (also tried the latest firmware and downgrade versions)

    From the Nero log:

        IA32 Nero Version: 11.2.4.100
        Internal Version: 11,2,4,100
        Recorder: <TSSTcorp CDDVDW SH-S203B>Version: SB04 - HA 1 TA 0 - 11.2.4.100
        Adapter driver: <Serial ATA> HA 1
        Drive buffer : 2048kB
        Bus Type : via Inquiry data
        CD-ROM: <TSSTcorp CDDVDW SH-S203B >Version: SB04 - HA 1 TA 0 - 11.2.4.100
        Adapter driver: <Serial ATA> HA 1

        18:58:10 #37 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 00 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000

        18:58:10 #38 SectorVerify 20 File Cdrdrv.cpp, Line 12057
        Read errors from sector 0 to 14 <Padding>

        18:58:19 #39 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 10 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000

        18:58:19 #40 SectorVerify 21 File Cdrdrv.cpp, Line 12057
        Read error at sector 15 <Virtual Multisession Info>

        18:58:19 #41 SectorVerify 20 File Cdrdrv.cpp, Line 12057
        Read errors from sector 16 to 18 <Volume Structure Descriptor Sequence>

        18:58:28 #42 SPTI -1511 File SCSIPassThrough.cpp, Line 224
        CdRom0: SCSIStatus(x02) WinError(0) NeroError(-1511)
        CDB Data: 0x28 00 00 00 00 20 00 00 10 00
        Sense Key: 0x04 (KEY_HARDWARE_ERROR)
        Sense Code: 0x3E
        Sense Qual: 0x02
        Sense Area: 0x70 00 04 00 00 00 00 0A 00 00 00 00 3E 02
        Buffer x08047340: Len x8000

    Read the article

  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry-run mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    Why hasn't anything changed in the --stats output, given that now only the permissions and the timestamps need to be updated, not the full file contents?

    Read the article

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with point-in-time recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"
        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
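
    (A quick way to tell whether those three PIDs are three independent runs or one run's worker processes - a local rsync normally forks into cooperating sender/receiver processes - is to view them as a tree.)

        # Parent/child relationships make a single multi-process run obvious;
        # the [r] trick keeps grep from matching itself:
        ps axf | grep [r]sync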

    Read the article

  • Replacing all disks in a non-OS RAID 5 volume

    - by molecule
    Hi all, we currently have a server with 8 HDD slots. It is an HP DL380 G5 with a P400 controller. 2 drives are in a RAID 1+0 config hosting the OS, and 6 drives are in a RAID 5 config holding an Oracle DB. Basically, the RAID 5 volume is running out of space and we would like to swap all 6 drives for higher-capacity disks. Excuse my ignorance, as I am pretty new to this... I believe we will need to back up the data, delete the RAID volume, insert the new disks, recreate the volume, and restore the data. Two questions:

        1. Do we need to worry about the OS partition, or is it completely independent, so we can simply take out the 6 disks, insert 6 new ones, and have the controller recognize them and form a new RAID 5 volume? We should not need to reinstall the OS or Oracle, correct?

        2. We are going to restore the data on the volume from another source (our vendor will take care of this), but we would like to keep the existing data on the 6 old disks just in case we run into issues and want to fall back. Is this possible?

    Thanks in advance.
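
    (Before pulling any disks, it may help to capture how the controller currently maps logical drives to physical bays; a sketch using HP's CLI tool, if it's installed - the slot number is an assumption.)

        # Dump the P400's current array/logical-drive layout for reference:
        hpacucli ctrl slot=0 show config detail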

    Read the article

  • Permission issue for Apache

    - by Aamir Adnan
    Environment details: Amazon EC2, Ubuntu 12.04, Django + mod_wsgi + Python 2.6, web server: apache2.

    I have mounted a 10GB EBS volume on an instance at /mnt/ebs1/. After mounting and formatting the volume, I placed all my project files in /mnt/ebs1/project. The wsgi file is /mnt/ebs1/project/apache/django.wsgi, with this content:

        import os, sys
        sys.path.insert(0, '/mnt/ebs1/project')
        sys.path.insert(1, '/mnt/ebs1')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    My httpd.conf file looks like this:

        LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
        WSGIPythonHome /usr/bin/python2.6
        WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi

        <Directory /mnt/ebs1/project>
            Order allow,deny
            Allow from all
        </Directory>

        <Directory /mnt/ebs1/project/apache>
            Order allow,deny
            Allow from all
        </Directory>

        Alias /static/ /mnt/ebs1/project/static/
        <Directory /mnt/ebs1/project/static>
            Order deny,allow
            Allow from all
        </Directory>

    The above configuration gives me "Forbidden: You don't have permission to access / on this server." I found the user running Apache with ps aux: it is www-data, with group www-data. I have tried to change the ownership of /mnt/ebs1 and its subdirectories using chown -R www-data:www-data /mnt/ebs1, but that still does not solve the problem. Can anyone tell me what I am doing wrong or have missed?
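
    (Ownership aside, every directory on the path down to the wsgi file also needs the execute/search bit for www-data, and fresh mount points often lack it; a quick check.)

        # Show owner and permissions for each path component in one shot:
        namei -l /mnt/ebs1/project/apache/django.wsgi
        # The mount point itself is the usual suspect:
        ls -ld /mnt /mnt/ebs1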

    Read the article

