Search Results


  • PXE booting LACP hosts on Force10 S50N with FTOS

    - by lolwutreddit
    Hardware: S50N
    Firmware: FTOS 8.4.2.6
    Problem: We're trying to PXE boot some servers that are connected via port-channel interfaces with LACP.
    Current work-around: we PXE boot a server with a single interface (eth0), and then use a Perl script to turn up the port-channel interfaces after the server is built.
    Details: Is anyone doing anything similar on Force10 S50 switches with FTOS? If not, is anyone doing this on another S series, or on a larger chassis-based Force10? I'm wondering if a native VLAN will solve this, since ports in a port-channel cannot explicitly have a VLAN set, and they don't seem to use the tagged or untagged VLAN that the port-channel is in. I will confirm this next (I think it's the only thing I haven't tried).
    Juniper example: http://broken.net/openindiana/how-to-pxe-boot-systems-on-lacp-using-juniper-switches/
    Cisco: there are plenty of documented ways to solve this issue on IOS and Nexus.
    Update/Edit: since there seems to be no way to use interface or port-channel mode commands to get the individual interfaces to show up in spanning-tree (RSTP in this case), the ports should never go into a forwarding state. I'm not going to mess with it anymore unless a) someone who has experience passes it on, or b) Force10 comes up with a solution for this (I'm guessing it would only be introduced on the other S platforms (S55, S60), since the S50 seems to be near EOL). I'm basing that on the fact that the Open Automation type features are only supported on the newer switches.

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to run 11.10 from my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu, so I can't try the usual recipe:

        Highlight the first entry, press "e" to edit it. Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes). Press Ctrl + X to continue booting. Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the GRUB menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related installed GRUB packages, reinstall grub-pc, and finally run sudo update-grub. Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275). What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
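
    The grub-probe error is consistent with running update-grub against a root filesystem mounted from the live session without the virtual filesystems bound into it. A minimal chroot-based repair sketch, assuming the installed root really is /dev/sda1 (adjust to your layout):

        # From the 11.10 live USB/pendrive session:
        sudo mount /dev/sda1 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt
        # Inside the chroot, /dev is mounted, so grub-probe can resolve /
        nano /etc/default/grub     # comment out GRUB_HIDDEN_TIMEOUT, set GRUB_TIMEOUT=10
        update-grub
        grub-install /dev/sda
        exit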

  • Computer specs for a large database

    - by SpeksETC
    What sort of computer specs (CPU, RAM, disk speed) should I use for running queries on a database of 200+ million records? The queries are for a research project, so there is only one "user" and only one query will be running at a time. I tried it on my own laptop running SQL Server, with an i3 processor, 2GB RAM and a 5400 RPM disk, and a simple query didn't finish even after 8+ hours. I have the option to connect an SSD via eSATA and upgrade to 4GB RAM, but I'm not sure if this will be enough... Thanks!
    Edit: The database is about 25 GB and the indexes are not set up properly. When I tried to add an index, I let it run for about 8 hours and it still hadn't finished, so I gave up. Should I have more patience :)? In general, the queries will run once in a while and it's OK even if it takes a couple of hours to complete... Also, the queries will produce roughly 10 million records which I need to process using Stata/Matlab, and I'm concerned that my current laptop is not strong enough, but I'm unsure of the bottleneck...
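
    On a data set of this size, the single biggest win is usually a covering index on whatever columns the slow query filters and returns, built once up front (ideally after moving the data and tempdb onto the SSD). A hedged T-SQL sketch; the table and column names are placeholders, not the real schema:

        -- Hypothetical names; substitute the columns the slow query actually uses.
        CREATE NONCLUSTERED INDEX IX_Records_LookupKey
            ON dbo.Records (LookupKey, EventDate)
            INCLUDE (Amount)
            WITH (SORT_IN_TEMPDB = ON);   -- pushes the sort work onto tempdb's (faster) drive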

  • Problem running mysql client, cannot connect to mysql server

    - by ehsanul
    Edit3: Thanks for the help everyone. Sorry for wasting anybody's time, but it seems like a simple reboot solved it. I should've known better, but I just had the assumption that the "restart" solution is mostly valid just for MS Windows (no offense). I'll keep this in mind before I ask a question here again.

    I installed the mysql-client-5.0 and mysql-server-5.0 packages on Ubuntu 8.04, using sudo apt-get install. When I try to run the "mysql" command, I get the following error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    To verify that the MySQL server is running, I tried this, and it does seem to be running, with the correct socket too:

        $ ps aux | grep mysql
        root     13388  0.0  0.0   1772   528 ?      S    06:24   0:00 /bin/sh /usr/bin/mysqld_safe
        mysql    13553  0.0  1.4 127012 15332 ?      Sl   06:25   0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        root     13555  0.0  0.0   3008   696 ?      S    06:25   0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        ehsanul  16910  0.0  0.0   3092   772 pts/4  R+   07:17   0:00 grep mysql

    So I don't understand why I'm getting an error trying to connect to the MySQL server. Note that I'm completely new to MySQL.

    Edit: As requested in comments, the exact command that is returning the error is simply "sudo mysql". And when I check netstat for active network services, I do see an entry for port 3306, with Protocol: tcp, IP Source: 127.0.0.1, State: LISTEN.

    Edit2: It appears as if the /var/run/mysqld/mysqld.sock socket doesn't exist (if I'm interpreting the following output correctly):

        $ ls -al /var/run/mysqld/
        total 0
        drwxr-xr-x  2 mysql root  40 2009-08-06 06:36 .
        drwxr-xr-x 20 root  root 860 2009-08-06 06:25 ..
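
    Given that the socket file is missing while mysqld claims to be running, restarting the daemon (which is ultimately what the reboot did) plus a couple of quick checks usually settles it. A hedged sketch for Ubuntu 8.04:

        sudo /etc/init.d/mysql restart              # a clean start should recreate /var/run/mysqld/mysqld.sock
        ls -l /var/run/mysqld/mysqld.sock           # confirm the socket exists again
        mysqladmin -u root -p status                # confirms the server answers over the socket
        mysql --protocol=TCP -h 127.0.0.1 -u root -p   # bypasses the socket entirely, since port 3306 is listening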

  • Ubuntu 12.04 compiz - disabled all compiz plugins - empty screen

    - by gotqn
    A friend of mine installed Ubuntu 10.04 on my new machine (I have always been a Windows user and have no experience with Linux). I started to watch a tutorial about how to make the 'Rotated Cube' using Compiz, but the cube appears in the form of a list (only two slides). I thought this could be a result of my video adapters (only two - one on the processor and one on the motherboard) not supporting this option. Anyway, I decided to disable all Compiz plugins and options, because my friend had set some and I started to think the plugins were conflicting with each other. After that I got only an empty screen (no menu, no icons, nothing) and can do nothing. How do I fix this?
    EDIT: When I remove the Compiz packages (from the console), the menu is shown again. Then I install Compiz again (some of the effects are still not working). After a restart or log out/in, the menu is hidden again. I suppose there are some settings that I've broken, but they are saved somewhere in the system; removing Compiz does not delete them, so they are re-activated after Compiz is installed again and the PC is restarted?
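
    If the broken settings really are per-user, they most likely live in dconf/compizconfig under your home directory rather than in the packages, which would explain why reinstalling Compiz doesn't help. A hedged reset sketch for a 12.04-era Unity desktop, run from a terminal (Ctrl+Alt+T) or a working TTY:

        dconf reset -f /org/compiz/                # wipe the per-user Compiz/Unity plugin settings
        rm -rf ~/.compiz-1 ~/.config/compiz-1      # older per-user Compiz session data, if these directories exist
        setsid unity                               # restart Unity with the default plugin set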

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs. Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs, and when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS7? I already tried the following rule, but it does not work - I get a 404.

        <rewrite>
          <globalRules>
            <rule name="TFS" stopProcessing="true">
              <match url="^(?:tfs/)?(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" />
              </conditions>
              <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
            </rule>
          </globalRules>
        </rewrite>

    EDIT: I actually got this working by adding a binding for the site on port 80 with the host name tfs.server.domain.com. But using tfs.server.domain.com, I can't authenticate using Windows Authentication. Is there something I need to configure for Windows Authentication? You can see a trace here: http://pastebin.com/k0QrnL0m
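
    Windows Authentication failing only for the friendly host name is the classic symptom of a missing Kerberos SPN for that name (or the loopback check, if you are testing from the server itself). A hedged sketch; DOMAIN\tfsservice is a placeholder for whatever account the TFS application pool actually runs as:

        REM Run from an elevated prompt with the AD admin tools available:
        setspn -S HTTP/tfs.server.domain.com DOMAIN\tfsservice
        setspn -S HTTP/tfs DOMAIN\tfsservice
        REM -S checks for duplicates before adding; review with: setspn -L DOMAIN\tfsservice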

  • Postfix installation error on Ubuntu

    - by kgpdeveloper
    How do I fix this error on Ubuntu 10.04?

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        postfix is already the newest version.
        The following packages were automatically installed and are no longer required:
          libaprutil1-dbd-sqlite3 libcap2 apache2.2-bin libapr1 libaprutil1-ldap libaprutil1 php5-common
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up postfix (2.7.0-1) ...
        Postfix configuration was not changed. If you need to make changes, edit
        /etc/postfix/main.cf (and others) as needed. To view Postfix configuration
        values, see postconf(1).
        After modifying main.cf, be sure to run '/etc/init.d/postfix reload'.
        Running newaliases
        newaliases: warning: valid_hostname: numeric hostname: 202002
        newaliases: fatal: file /etc/postfix/main.cf: parameter myhostname: bad parameter value: 202002
        dpkg: error processing postfix (--configure):
          subprocess installed post-installation script returned error exit status 75
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
          postfix
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Even if I reboot, the same error shows up. Thanks for the help.
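
    The key lines are the newaliases ones: the machine's hostname is the bare number 202002, which Postfix rejects as myhostname. A hedged fix sketch; mail.example.com is a placeholder for the server's real fully qualified name:

        sudo postconf -e "myhostname = mail.example.com"   # give Postfix a valid FQDN explicitly
        sudo dpkg --configure -a                           # re-run the failed postfix post-install step
        sudo /etc/init.d/postfix restart
        # Fixing /etc/hostname (and /etc/hosts) to a non-numeric name avoids the
        # same complaint from other daemons as well.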

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf:

        tmpdir=/dev/shm/mysqltmp

    I can create /dev/shm/mysqltmp just fine and do:

        chown mysql:mysql /dev/shm/mysqltmp
        chcon --reference /tmp/ /dev/shm/mysqltmp

    I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages: SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t). SELinux is a hard mistress. Details:

        Source Context        root:system_r:mysqld_t
        Target Context        system_u:object_r:tmpfs_t
        Target Objects        /dev/shm [ dir ]
        Source                mysqld
        Source Path           /usr/libexec/mysqld
        Port                  <Unknown>
        Host                  db.example.com
        Source RPM Packages   mysql-server-5.0.77-3.el5
        Target RPM Packages
        Policy RPM            selinux-policy-2.4.6-255.el5_4.1
        Selinux Enabled       True
        Policy Type           targeted
        MLS Enabled           True
        Enforcing Mode        Enforcing
        Plugin Name           catchall_file
        Host Name             db.example.com
        Platform              Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64
        Alert Count           46
        First Seen            Wed Nov 4 14:23:48 2009
        Last Seen             Thu Nov 5 09:46:00 2009
        Local ID              e746d880-18f6-43c1-b522-a8c0508a1775

    ls -lZ /dev/shm shows:

        drwxrwxr-x  mysql mysql system_u:object_r:tmp_t    mysqltmp

    and the permissions for /dev/shm itself are:

        drwxrwxrwt  root root system_u:object_r:tmpfs_t    shm

    I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql, with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
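
    A hedged sketch of the usual approach: give the directory a type mysqld_t is allowed to write (mysqld_tmp_t), record it as a file-context rule so relabels keep it, and fall back to a small local policy module if mysqld is still denied getattr on /dev/shm itself. Because tmpfs is emptied on every boot, the mkdir/chown/restorecon steps also have to be repeated at boot time (an init script or rc.local line):

        semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
        restorecon -Rv /dev/shm/mysqltmp
        ls -lZ /dev/shm                # mysqltmp should now carry mysqld_tmp_t

        # If AVC denials against /dev/shm (tmpfs_t) remain, build a small local module from them:
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmp
        semodule -i mysqltmp.pp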

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows:

        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.

        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones
        and where they can be defined.

    My date is as follows:

        [root@machine cron.daily]# date
        Thu Aug 23 06:25:21 GMT 2012

    Now, based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800", but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone.

    EDIT: Sadly, both of the suggested changes are not working:

        [root@machine cron.daily]# cat /etc/TIMEZONE
        UTC

    Quanta's suggestion:

        [root@machine cron.daily]# cat ~/.bash_profile
        # .bash_profile

        # Get the aliases and functions
        if [ -f ~/.bashrc ]; then
                . ~/.bashrc
        fi

        # User specific environment and startup programs

        PATH=$PATH:$HOME/bin
        export TZ=GMT
        export PATH

        [root@machine cron.daily]# source ~/.bash_profile
        [root@machine cron.daily]# ./0logwatch
        ERROR: Date::Manip unable to determine TimeZone.

        Execute the following command in a shell prompt:
            perldoc Date::Manip
        The section titled TIMEZONES describes valid TimeZones
        and where they can be defined.
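
    Two things work against the .bash_profile approach: the daily logwatch job runs from cron, which never sources ~/.bash_profile, and Date::Manip wants a zone name (GMT, Etc/GMT, UTC), not an offset like "+0800". A hedged sketch of setting the zone where the job actually executes:

        # Option 1: a plain VAR=value line near the top of /etc/crontab applies to every cron job:
        #   TZ=Etc/GMT
        # Option 2: export it inside the wrapper that cron runs (inserted as line 2 of the script):
        sed -i '2i export TZ=Etc/GMT' /etc/cron.daily/0logwatch
        # Re-test without relying on an interactive shell's environment:
        env -i /etc/cron.daily/0logwatch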

  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. The configuration is below. What I'm finding is that if I connect to localhost:80 the connection is fine, but if I connect to 127.0.0.1:80 the request is rejected with a 400. When I check Apache's logs, they indicate that stunnel is using localhost as the SNI both times, but the HTTP request lists localhost in one case and 127.0.0.1 in the other. Is it possible to tell stunnel to either use whatever is in the HTTP request, or to somehow configure two clients, each with different SNI values?

    stunnel.conf:

        debug = 7
        options = NO_SSLv2

        [xmlrpc-httpd]
        client = yes
        accept = 80
        connect = 443

    Apache error.log:

        [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different

    Apache access.log:

        "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)"
        "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)"

    wget:

        $ wget -d localhost
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: localhost
        Connection: Keep-Alive
        ---request end---

        $ wget -d 127.0.0.1
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: 127.0.0.1
        Connection: Keep-Alive
        ---request end---

    Edit: Apache config - nothing out of the ordinary, it's just a virtual host listening on 443:

        <VirtualHost *:443>

  • ZFS - Impact of L2ARC cache device failure (Nexenta)

    - by ewwhite
    I have an HP ProLiant DL380 G7 server running as a NexentaStor storage unit. The server has 36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12 SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache and a DDRdrive PCI ZIL accelerator. This system serves NFS to multiple VMware hosts. I also have about 90-100GB of deduplicated data on the array.

    I've had two incidents where performance tanked suddenly, leaving the VM guests and the Nexenta SSH/web consoles inaccessible and requiring a full reboot of the array to restore functionality. In both cases, it was the Intel X25-M L2ARC SSD that failed or was "offlined". NexentaStor failed to alert me on the cache failure, however the general ZFS FMA alert was visible on the (unresponsive) console screen. The zpool status output showed:

          pool: vol1
         state: ONLINE
          scan: scrub repaired 0 in 0h57m with 0 errors on Sat May 21 05:57:27 2011
        config:

                NAME                        STATE     READ WRITE CKSUM
                vol1                        ONLINE       0     0     0
                  mirror-0                  ONLINE       0     0     0
                    c8t5000C50031B94409d0   ONLINE       0     0     0
                    c9t5000C50031BBFE25d0   ONLINE       0     0     0
                  mirror-1                  ONLINE       0     0     0
                    c10t5000C50031D158FDd0  ONLINE       0     0     0
                    c11t5000C5002C823045d0  ONLINE       0     0     0
                  mirror-2                  ONLINE       0     0     0
                    c12t5000C50031D91AD1d0  ONLINE       0     0     0
                    c2t5000C50031D911B9d0   ONLINE       0     0     0
                  mirror-3                  ONLINE       0     0     0
                    c13t5000C50031BC293Dd0  ONLINE       0     0     0
                    c14t5000C50031BD208Dd0  ONLINE       0     0     0
                  mirror-4                  ONLINE       0     0     0
                    c15t5000C50031BBF6F5d0  ONLINE       0     0     0
                    c16t5000C50031D8CFADd0  ONLINE       0     0     0
                  mirror-5                  ONLINE       0     0     0
                    c17t5000C50031BC0E01d0  ONLINE       0     0     0
                    c18t5000C5002C7CCE41d0  ONLINE       0     0     0
                logs
                  c19t0d0                   ONLINE       0     0     0
                cache
                  c6t5001517959467B45d0     FAULTED      2   542     0  too many errors
                spares
                  c7t5000C50031CB43D9d0     AVAIL

        errors: No known data errors

    This did not trigger any alerts from within Nexenta. I was under the impression that an L2ARC failure would not impact the system, but in this case it surely was the culprit. I've never seen any recommendations to RAID L2ARC. Removing the bad SSD entirely from the server got me back up and running, but I'm concerned about the impact of the device failure (and maybe the lack of notification from NexentaStor as well).

    Edit - What's the current best-choice SSD for L2ARC cache applications these days?
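
    For getting the pool back to a clean state without a full reboot next time, cache vdevs can be removed and re-added online; a hedged sketch using the device names above (the replacement device name is a placeholder):

        zpool remove vol1 c6t5001517959467B45d0   # drop the faulted X25-M from the cache vdev
        zpool status vol1                         # the cache section should now be gone
        zpool add vol1 cache c6tREPLACEMENTd0     # re-add once a new SSD is in place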

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for SSH. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password
        auth required pam_google_authenticator.so

    I've already tried adding an "auth sufficient pam_exec.so /etc/pam.d/ip.sh" line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
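
    A common recipe for this uses pam_access rather than a shell script: a hedged sketch that skips exactly one module (the google-authenticator line) when the connection comes from the trusted address; 203.0.113.10 and the access file name are placeholders:

        # /etc/pam.d/sshd -- place directly above the google-authenticator line:
        auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-sshd-local.conf
        auth required pam_google_authenticator.so

        # /etc/security/access-sshd-local.conf -- a match here means "skip the next module":
        + : ALL : 203.0.113.10
        - : ALL : ALL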

  • Use synergy with Physical KVM

    - by Mr. Man
    I am using Synergy on a Linux Mint computer as the server, with a Mac as the client. I also have a physical KVM switch. The problem I have is that whenever I switch the physical KVM to my Mac, Synergy stops working, as in the keyboard and mouse don't work with the Mac. Thanks in advance!

    EDIT: here are some logs. From the Mint machine:

        INFO: synergys.cpp,1042: Synergy server 1.3.1 on Linux 2.6.31-14-generic #48-Ubuntu SMP Fri Oct 16 14:04:26 UTC 2009 i686
        DEBUG: synergys.cpp,1051: opening configuration synergy.conf
        DEBUG: synergys.cpp,1062: configuration read successfully
        DEBUG: CXWindowsScreen.cpp,847: XOpenDisplay(:0.0)
        DEBUG: CXWindowsScreenSaver.cpp,339: xscreensaver window: 0x00000000
        DEBUG: CXWindowsScreen.cpp,117: screen shape: 0,0 1024x768
        DEBUG: CXWindowsScreen.cpp,118: window is 0x03800004
        DEBUG: CScreen.cpp,38: opened display
        DEBUG: CXWindowsScreen.cpp,679: registered hotkey F12 (id=efc9 mask=0000) as id=1
        NOTE: synergys.cpp,500: started server
        INFO: CServer.cpp,1141: screen ubuntu shape changed
        NOTE: CClientListener.cpp,127: accepted client connection
        DEBUG: CClientProxy1_0.cpp,404: received client marks-mac.local info shape=-1024,0 2304x800
        NOTE: CServer.cpp,278: client mac has connected
        INFO: CServer.cpp,447: switch from ubuntu to mac at -1024,393
        INFO: CScreen.cpp,116: leaving screen
        [a long run of garbled, interleaved CXWindowsClipboard.cpp debug lines follows]

    From the Mac:

        connecting to '192.168.3.5': 192.168.3.5:24800
        connected to server
        entering screen
        leaving screen
        entering screen
        leaving screen
        stopped client

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and tasks functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook and some others using Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this. I read through SOGo's official install guide and also checked here and here. However, I see most of the how-tos call for installing MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea if installing LDAP, MySQL, etc. is really required. Also, any guidance as to how to install it on a regular mail server would be appreciated. Sorry if this sounds vague; if any more information is required, I'll be happy to give it. Thanks in advance.
    Edit: The server in question has always been governed via cPanel (to install PHP, MySQL, configure DNS, etc.), so I am confused about whether I really need LDAP.

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working OK - that's saved us tons of time versus hunting down individual logging sources.
    What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare) and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with full-text enabled), work? I'd hope the cost would be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.)
    Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that when data is posted to the server, it seems to be being cut short for some users. It's reproducible for some users on the same box, but for the same user sending the same data on the same LAN from another PC it works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro.
    EDIT: The customer has temporarily set the Ethernet frame size on the server to 500 bytes. This made it work for now! I know some of the customer's users are on an internet provider that runs PPPoE.
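
    The combination of a fixed cut-off around 1140 bytes, PPPoE users, and the frame-size workaround points at a path-MTU / fragmentation problem rather than the PHP application. A couple of hedged checks and a workaround that can be tried on the Debian server itself (the hostname is a placeholder for an affected user's address):

        ping -M do -s 1472 client.example.net               # full-size, don't-fragment probe towards an affected user
        tcpdump -ni eth0 'icmp[0] == 3 and icmp[1] == 4'    # watch whether "fragmentation needed" ICMP ever arrives
        # Advertise a smaller MSS from the server so PPPoE clients never send full-size segments:
        iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1400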

  • Blank desktop when logging into a Virtualized Windows 2008 Terminal Server?

    - by Rachel
    We have a virtualized terminal server running Windows Server 2008. When the admin user logs in, everything is fine. When anyone else logs in, their desktop and Start menu are blank (they have the taskbar, Start button, and quick launch links, though). If I go into Windows Explorer, I can see icons in their desktop folder (although the icon image is missing and it is just displaying the generic icon), but I can't run any of them. If I log in with a user that is part of the Administrators group in Active Directory, I get the same behavior, except I can launch the programs found in the desktop folder from Windows Explorer. I cannot drag these items out onto the desktop though - the cursor doesn't allow me to drop them. From Task Manager I can see that explorer.exe and dwm.exe are both running. The Authenticated Users and Interactive groups are both under the Users group, along with our network's Domain Users group. Does anyone know why this is happening and how I can fix it?
    Also, not sure if it's related, but about 1 in every 3 logins just hangs at a completely blank blue screen (no Start button, taskbar, or quick launch buttons) and needs to be disconnected / reset by an admin.
    Edit: I just noticed that the desktop itself doesn't even respond to click events. It's almost like the entire desktop is missing. At first I thought it didn't respond to right-click events because of an AD policy, but then I noticed that if you open the Start menu and click the desktop, the Start menu doesn't close like it should.

  • How to setup Calendar permissions for group to group

    - by Sorean
    I've been scouring the internet and so far have only been able to find examples of how to grant calendar permissions from one user to another using the Add-MailboxFolderPermission command. This is great and it was okay when they only had a handful of users, but going forward it's not realistic to have to set individual calendar permissions on all calendars for each new user.

    Layout of the security groups already created (each group has a few people assigned to it):

        Techs
        Managers
        Admin

    What I am trying to accomplish is to set it up so that anyone who belongs to the Managers group can view the calendars of the Techs group, and Admins can view and edit the Techs group. I've found an example of adding just the security group name, but I get this error:

        [PS] C:\Windows\system32>Add-MailboxFolderPermission -Identity Techs:\Calendar -User "Admin" -AccessRights Owner
        The user "Admin" is either not valid SMTP address, or there is no matching information.
            + CategoryInfo          : NotSpecified: (0:Int32) [Add-MailboxFolderPermission], InvalidExternalUserIdException
            + FullyQualifiedErrorId : 39352699,Microsoft.Exchange.Management.StoreTasks.AddMailboxFolderPermission

    Am I creating the groups wrong? Am I using the wrong commands? Any guidance would be greatly appreciated.
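
    Add-MailboxFolderPermission only accepts a mailbox or a mail-enabled security group for -User, which is why the plain "Admin" group is rejected, and -Identity must be a single mailbox rather than a group. A hedged sketch, assuming "Techs", "Managers" and "Admins" are (or are first made) mail-enabled security groups; adjust the names to the real groups:

        # Grant the group-level rights on every Techs member's calendar:
        foreach ($tech in (Get-DistributionGroupMember -Identity "Techs")) {
            Add-MailboxFolderPermission -Identity "$($tech.Alias):\Calendar" -User "Managers" -AccessRights Reviewer
            Add-MailboxFolderPermission -Identity "$($tech.Alias):\Calendar" -User "Admins" -AccessRights Owner
        }
        # Re-run Set-MailboxFolderPermission instead if a permission entry already exists.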

  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable HTTP 1.0 requests to be gzipped. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file, or do I need to set it at an application level? Currently, I have this code in my web.config:

        <system.webServer>
          <urlCompression doDynamicCompression="true"
                          dynamicCompressionBeforeCache="true" />
          <httpCompression cacheControlHeader="max-age=86400"
                           noCompressionForHttp10="False"
                           noCompressionForProxies="False"
                           sendCacheHeaders="true" />
          ... other stuff snipped ...
        </system.webServer>

    It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. That MSDN page above says that it can be used in:

        Machine.config
        ApplicationHost.config
        Root application Web.config
        Application Web.config
        Directory Web.config

    So, can we set these settings on a per-website basis, programmatically in a web.config file? (This is an Application Web.config file...) What have I done wrong? Cheers :)

    EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using the Failed Request Tracing rules, which report back:

        DYNAMIC_COMPRESSION_START
        DYNAMIC_COMPRESSION_NOT_SUCESS
            Reason: 3
            Reason: NO_COMPRESSION_10
        DYNAMIC_COMPRESSION_END
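
    One likely explanation, offered as an assumption rather than a confirmed diagnosis: the <httpCompression> section lives in applicationHost.config and is locked there by default, so a site-level web.config cannot override noCompressionForHttp10 (whereas <urlCompression> is delegated and does work per site). A hedged sketch of setting it at the server level with appcmd, run from an elevated prompt:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /commit:apphost
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForProxies:"False" /commit:apphost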

  • What open source ecommerce webshops offer #1: usability, #2: PayPal integration, and #3: ease of administration and use

    - by Jonathan Hayward
    I've spent several days trying to deploy Satchmo, in the process asking several questions about deployment (http://stackoverflow.com/questions/11277407/can-anyone-explain-this-error-message-deploying-a-satchmo-project-under-gunicorn, http://stackoverflow.com/questions/11277685/is-there-a-howto-to-fcgi-for-deploying-satchmo, and http://stackoverflow.com/questions/11278295/what-is-the-most-stable-release-of-satchmo). Django's tagline is "The web framework for perfectionists with deadlines," and Satchmo's tagline is equally forceful: "The webshop for perfectionists with deadlines." I'm looking more to set up, configure, design, etc., rather than code for this one, and I'm taking a bit of a hint that for me at least the "with deadlines" bit is something that I cannot manage. Deployment has been a time sink. So, taking a step back, I don't specifically need to edit and extend the source code; what I want are first, good usability and a clean experience for the end-user, then being easy to deploy/install/manage/maintain, and enough so that even if you're having a slow day it should at most be one day's work to install, one day's work to get running, and one day's work to rebrand as white label (for simple branding). What ecommerce webshops should I be looking at?

  • Send mail on event log error trigger safe check frequency

    - by Zeb Rawnsley
    I want to use PowerShell to alert me when an error occurs in the event viewer on my new Win2k12 Standard server. I was thinking I could have the script execute every 10 minutes, but I don't want to put any strain on the server just for event log checking. Here is the PowerShell script I want to use (the From/Subject concatenations are wrapped in parentheses so they are passed as single arguments):

        $SystemErrors = Get-EventLog System | Where-Object { $_.EntryType -eq "Error" }

        If ($SystemErrors.Length -gt 0)
        {
            Send-MailMessage -To "[email protected]" -From ($env:COMPUTERNAME + "@company.co.nz") `
                -Subject ($env:COMPUTERNAME + " System Errors") -SmtpServer "smtp.company.co.nz" -Priority High
        }

    What is a safe frequency I can run this script at without hurting my server?

    Hardware:

        Intel Xeon E5410 @ 2.33GHz x2
        32GB RAM
        3x 7200RPM S-ATA 1TB (2x RAID1)

    Edit: With the help of Mathias R. Jessen's answer, I ended up attaching an event to the application & system log with the following script:

        Param(
            [string]$LogName
        )

        $ComputerName = $env:COMPUTERNAME;
        $To = "[email protected]"
        $From = $ComputerName + "@company.co.nz";
        $Subject = $ComputerName + " " + $LogName + " Error";
        $SmtpServer = "smtp.company.co.nz";

        $AppErrorEvent = Get-EventLog $LogName -Newest 1 | Where-Object { $_.EntryType -eq "Error" };

        If ($AppErrorEvent.Length -eq 1)
        {
            $AppErrorEventString = $AppErrorEvent | Format-List | Out-String;
            Send-MailMessage -To $To -From $From -Subject $Subject -Body $AppErrorEventString -SmtpServer $SmtpServer -Priority High;
        };

  • Ubuntu raid 1 write errors

    - by Micah
    I have an Ubuntu server set up with two SATA drives in a RAID 1 configuration with mdadm. The machine is used to record raw video, which involves a lot of writing to the disk. Sometimes during video recording the computer will crash, with the following errors in kern.log:

        Mar 15 10:39:41 video kernel: [414501.629864] ata2.00: exception Emask 0x10 SAct 0x0 SErr 0x400100 action 0x6
        Mar 15 10:39:41 video kernel: [414501.629870] ata2.00: BMDMA stat 0x26
        Mar 15 10:39:41 video kernel: [414501.629875] ata2.00: SError: { UnrecovData Handshk }
        Mar 15 10:39:41 video kernel: [414501.629880] ata2.00: failed command: WRITE DMA EXT
        Mar 15 10:39:41 video kernel: [414501.629889] ata2.00: cmd 35/00:00:28:6d:f6/00:04:06:00:00/e0 tag 0 dma 524288 out
        Mar 15 10:39:41 video kernel: [414501.629891]          res 51/84:b1:77:6e:f6/84:02:06:00:00/e0 Emask 0x30 (host bus error)
        Mar 15 10:39:41 video kernel: [414501.629896] ata2.00: status: { DRDY ERR }
        Mar 15 10:39:41 video kernel: [414501.629899] ata2.00: error: { ICRC ABRT }
        Mar 15 10:39:41 video kernel: [414501.629910] ata2.00: hard resetting link
        Mar 15 10:39:41 video kernel: [414501.973009] ata2.01: hard resetting link
        Mar 15 10:39:41 video kernel: [414502.482642] ata2.00: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar 15 10:39:41 video kernel: [414502.482658] ata2.01: SATA link down (SStatus 0 SControl 300)
        Mar 15 10:39:41 video kernel: [414502.546160] ata2.00: configured for UDMA/133
        Mar 15 10:39:41 video kernel: [414502.546203] ata2: EH complete

    Is this the result of faulty drives? Is software RAID just not performant enough for data rates of ~15 MB/s, even with a quad-core i7? Thanks for your help.

    Edit: cat /proc/mdstat returns this:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdb1[1] sda1[0]
              976760768 blocks [2/2] [UU]

        unused devices: <none>
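
    ICRC/Handshake errors usually point at the SATA link (cable, port or power) rather than the platters or the RAID layer. A few hedged checks on the drive behind ata2 (assumed here to be /dev/sdb; confirm the mapping first):

        sudo smartctl -a /dev/sdb | grep -i -E 'UDMA_CRC_Error_Count|Reallocated|Pending'   # CRC counter rises on link errors
        sudo smartctl -t short /dev/sdb            # queue a self-test; read results later with: smartctl -l selftest /dev/sdb
        dmesg | grep -i ata2                       # see whether the link keeps resetting or renegotiating speed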

  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I am wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication?

    My Zimbra server version is:

        [zimbra@zimbra ~]$ zmcontrol -v
        Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.

    My LDAP server status is:

        [zimbra@ldap ~]$ zmcontrol status
        Host ldap.domain.com
                ldap                    Running
                snmp                    Running
                stats                   Running
                zmconfigd               Running

    I already installed the nss-pam-ldapd packages on my servers:

        [root@www]# rpm -qa | grep ldap
        nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
        apr-util-ldap-1.3.9-3.el6_0.1.x86_64
        pam_ldap-185-11.el6.x86_64
        openldap-2.4.23-32.el6_4.1.x86_64

    My /etc/nslcd.conf is:

        [root@www]# tail -n 7 /etc/nslcd.conf
        uid nslcd
        gid ldap
        # This comment prevents repeated auto-migration of settings.
        uri ldap://ldap.domain.com
        base dc=domain,dc=com
        binddn uid=zimbra,cn=admins,cn=zimbra
        bindpw **pass**
        ssl no
        tls_cacertdir /etc/openldap/cacerts

    When I run:

        [root@www ~]# id username
        id: username: No such user

    But I am sure that the username user exists on the LDAP server.

    EDIT: When I run an ldapsearch command I get all results, with credentials and DNs:

        [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
        # extended LDIF
        #
        # LDAPv3
        # base <dc=domain,dc=com> (default) with scope subtree
        # filter: objectclass=*
        # requesting: ALL
        #

        # domain.com
        dn: dc=domain,dc=com
        zimbraDomainType: local
        zimbraDomainStatus: active
        .
        .
        .

  • Ripping a home video VCD on Linux or Windows with VLC or otherwise

    - by user259774
    I have a VCD with 22 minutes of video on it. I would like to retain this footage and throw away the VCD. I can play the whole thing with VLC ("open disc - vcd - /dev/sr0 - play"): all 22 minutes of the main track. I don't believe there's any other content aside from the main track, and I can seek to anywhere I want within the 22-minute track. If I mount /dev/sr0 on /media/vcd and then try to copy the only file from the MPEGAV folder, I get an I/O error, with an empty destination file. VLC has a "convert" option in addition to "play". When I use this I actually get a good OGG file back, after it runs through the video in painful real-time - I guess it dubs it frame by frame. But the file is only 10 minutes long, leaving 12 minutes off of the track. Handbrake doesn't detect its track titles, unfortunately. I don't know if I should start getting involved with GNU ddrescue, or if it's because VCDs somehow encode their data sectors differently. Anyway, I'm in way over my head, and if anyone knows how I could get that video track off the thing, feel free to share!
    Edit: I should note that I also have access to a Windows computer.
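
    VCD video tracks are stored as mode-2 sectors, so a plain file copy or dd tends to give exactly the I/O errors described; tools that read the track as a stream usually cope better. Two hedged options (package names: mplayer, vcdimager; track numbering varies, so try 1 and 2):

        mplayer vcd://1 -dumpstream -dumpfile video.mpg    # dump the raw MPEG stream straight off the track
        vcdxrip --cdrom-device=/dev/sr0                    # rips all sequences to avseqNN.mpg files, if available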

  • Arch linux - strange behaviour after installing fglrx

    - by kosto
    I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says:

        pacman -S catalyst catalyst-utils
        aticonfig --initial

    After this operation I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (Ctrl+Alt+F1/F2/F3) I saw only some strange dots, as if the pixels from the text were scattered across the whole screen. I was able to go back to KDM and enter the account details, though. This gave me a hang just before KDE loaded. Here's a video where I'm showing the above actions: http://glothriel.org/arch/arch_problem.ogg. Does anybody know what caused the problem? I can still chroot in to fix some issues. Thanks for your interest. The same thing happens on GNOME/GDM - that's my second try at installing Catalyst on Arch. The open drivers drain the battery twice as fast.

    EDIT: OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it from GRUB. You must know where your /etc and /boot partitions are mounted; if you have only one partition for / it's even simpler. Mount / on /mnt (where X is the number of the partition your / is installed on), chroot in, add the nomodeset line, and rebuild the GRUB config (note: this will delete your Windows GRUB configuration):

        mount /dev/sdaX /mnt
        arch-chroot /mnt
        nano /etc/default/grub        # add the line: GRUB_CMDLINE_LINUX="nomodeset"
        grub-mkconfig -o /boot/grub/grub.cfg
        exit
        umount /mnt
        reboot
