Search Results

Search found 9963 results on 399 pages for 'special commands'.

Page 295/399 | < Previous Page | 291 292 293 294 295 296 297 298 299 300 301 302  | Next Page >

  • Have an Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem connecting to an Ubuntu server via ssh. When I log in, I get this error:

      Could not chdir to home directory /home/username: Input/output error

    It seems like my home folder is corrupt or something. I cannot ls in /home, and I can't cd into my username directory. Even as root I cannot ls in /home or in any directory under it. I also notice that when I save or quit in vim, I get this error at the bottom of the screen:

      E138: Cannot write viminfo file /home/root/.viminfo!

    Any ideas? EDIT: this is what happens if I type in these commands:

      mount
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      fusectl on /sys/fs/fuse/connections type fusectl (rw)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      /dev/mapper/RAID1-lvvar on /var type xfs (rw)
      /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
      /dev/mapper/RAID5-lvhome on /home type xfs (rw)
      /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

      dmesg | tail
      [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
      [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
      [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

      fdisk -l /dev/sda
      Cannot open /dev/sda
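    The xfs_log_force "error 5" (EIO) lines and the unreadable /dev/sda point at a failing disk under the dm-3/dm-4 volumes rather than simple filesystem corruption. A minimal investigation sketch, assuming the /home volume can be taken offline and smartmontools is installed (the device names are guesses based on the output above):

      smartctl -a /dev/sda                      # check the physical disk's SMART health
      umount /home
      xfs_repair /dev/mapper/RAID5-lvhome       # replay/repair the XFS volume backing /home
      mount /home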

    Read the article

  • Why can't Logman start?

    - by Bill Paetzke
    I'm setting up my first logman counter, but it's not working! There is some file or folder permissions problem, or maybe I wrote the create-counter statement wrong. Here are my counter commands:

      logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
      logman start BillTest

    The first command works; it says counter creation was successful. The second command fails:

      Collection "BillTest" did not start, check the application event log for any errors

    Here's the error in the Event Viewer:

      The service was unable to open the log file C:\Temp_000001.blg for log BillTest and will be stopped. Check the log folder for existence, spelling, permissions, and ensure that no other logs or applications are writing to this log file. You can reenter the log file name using the configuration program. This log will not be started. The error returned is: Access is denied.

    I verified that C:\Temp exists. I'm not a permissions guru, but I did set all the accounts in the security tab of that folder to "full control." Still, the logman start command failed with the same error. I noticed that it was trying to write to C:\Temp_000001.blg instead of C:\Temp\000001.blg, which might be part of the problem. So I tried to update my counter to "C:\Temp\" instead of "C:\Temp", but that failed with a path-invalid error, and all the examples I saw online did not use a trailing slash. So, no dice there. I tried this on my machine (Windows XP) and my dev server (Windows Server 2003). Both failed with the same error. How can I fix this?
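    One hedged observation: -o expects an output file path (logman appends the _000001.blg serial number to it), not just a directory, which would explain the C:\Temp_000001.blg name. A sketch of the create command with an explicit base file name, keeping the original counter paths:

      logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp\BillTest" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
      logman start BillTest

    It may also matter that the collection runs under the Performance Logs and Alerts service account (typically Network Service) rather than your own, so that is the account that needs write access to C:\Temp; this is an assumption worth checking.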

    Read the article

  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for another application/technology: a number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and execute a simple series of tilts and flips. By logging accelerometer data, this creates a unique signature for the user. Even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project. I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures? Here are the closest articles I have been able to find:

      Knock-based commands for your Linux laptop
      Shake Well Before Use: Authentication Based on Accelerometer Data [PDF]
      Inferring Identity using Accelerometers in Television Remote Controls
      User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer
      Identifying Users of Portable Devices from Gait Pattern with Accelerometers [PDF]
      3D Signature Biometrics Using Curvature Moments [PDF]
      MoViSign: A novel authentication mechanism using mobile virtual signatures

    Read the article

  • .NET 3.5 installation comes up with Error 0x800F0906, then 0x800F081F using DISM

    - by Austin Meadows
    I've recently tried installing .NET 3.5 for an application on Windows 8.1. I used the OS's popup prompt to download/install .NET 3.5 and always get error code 0x800F0906. Upon further research, I found I would have to pop in my Windows 8 CD and install it with this command, where E:\ is where my CD is mounted:

      Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess

    This and every variant of it (e.g., removing /LimitAccess) has not worked for me, giving either the same error code (0x800F0906) or a different one, 0x800F081F. I've even copied the sxs folder to my hard drive, in case something was going on with the CD drive, only to get the same results. In that case, I used this command line:

      Dism /online /enable-feature /featurename:NetFx3 /All /Source:C:\dotnet35 /LimitAccess

    I find this surreal because in both cases the files are indeed there, but the program thinks they're not. Here's the CBS.log file. Any ideas on how to fix this? Any help is very appreciated :) EDIT: I now have a proper dism.log file; I'm not sure what happened to the last one or why it did that. Here's the link to the new log file. It's interesting to note that it doesn't recognize some of the options in the script, such as "featurename" or "source".
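    One widely reported cause of the 0x800F0906/0x800F081F pair on Windows 8.1 is having the .NET 3.5 security updates installed before the feature itself; removing them and re-running DISM is a common workaround. The KB numbers below are from memory and worth verifying before uninstalling anything:

      wusa /uninstall /kb:2966826 /quiet /norestart
      wusa /uninstall /kb:2966827 /quiet /norestart
      wusa /uninstall /kb:2966828 /quiet /norestart
      Dism /online /enable-feature /featurename:NetFx3 /All /Source:E:\sources\sxs /LimitAccess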

    Read the article

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

      start on started mysql
      stop on stopping mysql
      chdir /home/myuser/mydir/project
      exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
      respawn
      respawn limit 10 90

    As myuser I can start, stop, and reload the job fine; it works perfectly:

      $ start sensors
      sensors start/running, process 1332
      $ stop sensors
      sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

      $ status sensors
      sensors stop/waiting
      $ sudo restart mysql
      mysql start/running, process 1209
      $ status sensors
      sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I can just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious if there is a way to do it using just Upstart.
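    If the /etc/init route turns out to be acceptable after all, a hedged sketch of the same job as a system job that drops privileges itself (the su invocation is an assumption; the Upstart 0.9 shipped with 11.04 has no setuid stanza):

      # /etc/init/sensors.conf
      start on started mysql
      stop on stopping mysql
      respawn
      respawn limit 10 90
      chdir /home/myuser/mydir/project
      exec su -s /bin/sh -c 'exec /home/myuser/mydir/env/bin/python manage.py sensors' myuser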

    Read the article

  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the certificate revocation functionality of a CMTS device. This requires me to set up an OCSP responder. Since it will only be used for testing, I assume that the minimal implementation provided by OpenSSL should suffice. I have extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request my server invariably responds with "unknown". It seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all used commands, from setting up the CA until starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/ Any help would be greatly appreciated.
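    For what it's worth, openssl ocsp answers "unknown" for any serial number that is not recorded in the CA index it was started with, so the modem certificate has to appear in the index.txt of the CA database the responder is serving; a third-party certificate that was merely converted to PEM will not be found there. A minimal responder/query sketch, with file names assumed:

      # responder: index.txt must be the database of the CA that issued (or recorded) the modem certificate
      openssl ocsp -port 8888 -index demoCA/index.txt -CA demoCA/cacert.pem -rsigner ocsp-cert.pem -rkey ocsp-key.pem -text
      # test query from a second shell
      openssl ocsp -CAfile demoCA/cacert.pem -issuer demoCA/cacert.pem -cert modem.pem -url http://127.0.0.1:8888 -resp_text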

    Read the article

  • What's up with stat on Macos/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO ("Give the mount point of a path"), one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS 10.4. For my system, df and mount give:

      cas cas$ df
      Filesystem              512-blocks     Used     Avail Capacity  Mounted on
      /dev/disk0s3              58342896 49924456   7906440    86%    /
      devfs                          194      194         0   100%    /dev
      fdesc                            2        2         0   100%    /dev
                                    1024     1024         0   100%    /.vol
      automount -nsl [166]             0        0         0   100%    /Network
      automount -fstab [170]           0        0         0   100%    /automount/Servers
      automount -static [170]          0        0         0   100%    /automount/static
      /dev/disk2s1             163577856 23225520 140352336    14%    /Volumes/Snapshot
      /dev/disk2s2             409404102  5745938 383187960     1%    /Volumes/Sparse

      cas cas$ mount
      /dev/disk0s3 on / (local, journaled)
      devfs on /dev (local)
      fdesc on /dev (union)
       on /.vol
      automount -nsl [166] on /Network (automounted)
      automount -fstab [170] on /automount/Servers (automounted)
      automount -static [170] on /automount/static (automounted)
      /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
      /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

      cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
      / disk0s3r
      /dev ???r
      /dev ???r
      /.vol ???r
      /Network ???r
      /automount/Servers ???r
      /automount/static ???r
      /Volumes/Snapshot disk2s1r
      /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

      The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript: per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...
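    If the end goal is just "which device backs this path", a hedged alternative that sidesteps stat entirely is to let df resolve the path itself:

      df -P /some/path | awk 'NR==2 {print $1}'    # prints the Filesystem column for the volume containing /some/path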

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, and there is always something more important on my to-do list). My server does not contain any secret data and no lives depend on it. Basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist under some arcane settings. I'm in search of a "happy medium", for example a list of known important problems in the default installation of CentOS 5.5 and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem that I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!
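    A low-effort baseline in that spirit, sketched on the assumption that the EPEL repository is (or can be) enabled on CentOS 5:

      yum install yum-cron            # unattended nightly updates
      chkconfig yum-cron on && service yum-cron start
      yum install fail2ban            # basic SSH brute-force protection (packaged in EPEL)
      chkconfig fail2ban on && service fail2ban start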

    Read the article

  • Windows Bluescreen - atikmpag.sys

    - by Mochan
    Information:

      Name: atikmpag.sys bluescreen (BSOD, or BlueScreen of Death)
      Error code: 0x00000116
      Appears when: playing games, watching videos
      Can be reproduced: yes
      Cause: the graphics card is the main assumption

    System Specifications. Before we begin, I will inform you of my specifications:

      OS: Windows 7 x64 Home Edition
      Model: Dell Inspiron 15R Special Edition (aka Inspiron 7520) (add 2GB of RAM to the model linked)
      Hard Drive: 1TB
      CPU: Intel quad-core i7 Sandy Bridge (I think) processor at 2.10GHz (I think it can be clocked to 3GHz?)
      RAM: 6GB (I think 1 x 4GB and 1 x 2GB)
      Display: 15.6" HD (1366x768)
      Graphics: AMD Radeon HD 7500M 2GB

    Details. So now that you know some basics about my computer, I'll get to the problem. Being an Ubuntu user I hardly use Windows, but occasionally I do, mainly to run Skyrim and other games incompatible with Linux and WINE. The new Sims 3 Seasons patch is also now not supported. The bluescreen appears when playing these two games (and other ones, theoretically), and I have also heard others say it happens while watching HD movies and video series. Watching the bluescreen as it happens, I see it is the 'atikmpag.sys' error. I have not installed much, and nothing significant: I think I have downloaded Skyrim, Firefox and The Sims 3. I haven't done much more... since Ubuntu is definitely the best in comparison! (No hate, just a joke :P). I can reproduce it easily (just by running a game for less than a minute). It happens every time, but never at a specific moment. So far I have found that it may be caused by lack of power to the graphics card, or the card may be damaged or fried, even though I've had the computer for a mere 4 months (and have had other problems with it also). I have contacted Dell but they are useless beyond belief. Anyone with any information, solutions or details is encouraged to share their knowledge, as it would be immensely appreciated.
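    Stop code 0x116 is the VIDEO_TDR_ERROR family: Windows reset a display driver that stopped responding, and atikmpag.sys points at the AMD driver. Besides a clean reinstall of the AMD driver, one commonly suggested (and hedged) workaround is to lengthen the TDR timeout so brief driver hangs no longer trigger the bugcheck:

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 8 /f
      shutdown /r /t 0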

    Read the article

  • Mysql cluster strange behaviour

    - by Champion
    Hi guys, I have two MySQL clusters on two different servers, with a management node on each of them. The cluster went down somehow, and I ran the following commands to start it again:

      # start the management node on srv1
      srv1: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf
      # start the management node on srv2
      srv2: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf
      # start the ndbd nodes on srv1
      srv1: mysqlc/bin/ndbd --initial -c localhost:1186
      # start the ndbd nodes on srv2
      srv2: mysqlc/bin/ndbd --initial -c localhost:1186
      # start the mysqld server on srv1
      srv1: mysqlc/bin/mysqld --defaults-file=my_cluster/conf/my.cnf --user=root &

    And here is the problem: the mysql server is not loading the data. Only the database names are present. All the tables with ENGINE=ndbcluster are not being loaded, while tables with ENGINE=myisam are loaded fine. Backup scripts helped me load the data, but this way I can't use the cluster setup. A similar issue appeared when I started srv2. How can I resolve this issue?
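    One thing worth flagging as the likely culprit: --initial on ndbd requests an initial start, which deletes the data node's existing NDB files (and --initial on ndb_mgmd discards the cached configuration), so starting every data node that way wipes the clustered tables and leaves only the schema. A hedged sketch of a normal restart, using the same paths as above:

      # management nodes: no --initial unless config.ini actually changed
      mysqlc/bin/ndb_mgmd -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf
      # data nodes: plain start; reserve --initial for a deliberate initial start followed by a restore from backup
      mysqlc/bin/ndbd -c localhost:1186
      # SQL node
      mysqlc/bin/mysqld --defaults-file=my_cluster/conf/my.cnf --user=root &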

    Read the article

  • How to enable IPv6 on my Ubuntu 11.04 virtual machine

    - by liv2hak
    I have installed 3 VMs on my PC (Ubuntu 11.04). I want to set up an IPv6 network to review and test some of the IPv6 tools, like NDPMonitor (which monitors ICMP messages of the Neighbour Discovery Protocol). The IPv6 addresses are as follows:

      linux_router - fe80::a00:27ff:fed5:f7e9/64
      labhack1     - fe80::a00:27ff:fed2:8bd1/64
      labhack2     - fe80::a00:27ff:fed7:2f2d/64

    The commands below have been run on both linux_router and labhack1:

      sudo ip r a 2001:468:181:f100::/64 dev eth0
      sudo vim /etc/radvd.conf

    The file looks like this:

      interface eth0
      {
          AdvSendAdvert on;       /* means that we are sending out advertisements */
          MinRtrAdvInterval 5;    /* these options control how often advertisements are sent; */
          MaxRtrAdvInterval 15;   /* they are not mandatory but valuable settings */
          prefix fe80::a00:27ff::/64
          {
              AdvOnLink on;       /* says to the host: everyone sharing this prefix is on the same local link as you */
              AdvAutonomous on;   /* says to a host: "use this prefix to autoconfigure your address" */
          };
      };

      sudo sysctl -w net.ipv6.conf.all.forwarding=1

    When I try ping6 -I eth0 fe80::a00:27ff:fed5:f7e9, I get "Destination unreachable: Address unreachable". I am not sure what I am doing wrong here; I am a beginner at Linux administration. Basically I think I am missing whatever is analogous to physically connecting the VMs. Any help that would point me in the right direction would be greatly appreciated.
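    Two hedged suggestions. First, "Address unreachable" on a link-local ping usually means the VMs are not actually on the same layer-2 segment, so check that all three are attached to the same internal/host-only virtual network before anything else. Second, radvd is meant to advertise a routable prefix rather than the link-local fe80::/64 one; a sketch of radvd.conf reusing the 2001:468:181:f100::/64 prefix from the ip command above:

      interface eth0
      {
          AdvSendAdvert on;
          MinRtrAdvInterval 5;
          MaxRtrAdvInterval 15;
          prefix 2001:468:181:f100::/64
          {
              AdvOnLink on;
              AdvAutonomous on;
          };
      };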

    Read the article

  • nconf deployment.ini configuration for a basic Nagios server on CentOS 6.2

    - by jshin47
    I have set up nconf and Nagios but I cannot figure out how to configure deployment.ini to properly deploy the generated configuration to /usr/local/nagios/etc. Here are the directory listings of interest:

      [jshin@nag0 tmp]$ ls
      Default_collector  global
      [jshin@nag0 tmp]$ cd Default_collector/
      [jshin@nag0 Default_collector]$ ls
      advanced_services.cfg  hostgroups.cfg  service_dependencies.cfg  services.cfg
      host_dependencies.cfg  hosts.cfg       servicegroups.cfg
      [jshin@nag0 Default_collector]$ cd ..
      [jshin@nag0 tmp]$ cd global/
      [jshin@nag0 global]$ ls
      checkcommands.cfg  contacts.cfg        misccommands.cfg       timeperiods.cfg
      contactgroups.cfg  host_templates.cfg  service_templates.cfg
      [jshin@nag0 global]$ cd ..
      [jshin@nag0 tmp]$ cd /usr/local/nagios/etc/
      [jshin@nag0 etc]$ ls
      cgi.cfg  htpasswd.users  nagios.cfg  objects  resource.cfg
      [jshin@nag0 etc]$ cd objects/
      [jshin@nag0 objects]$ ls
      commands.cfg  localhost.cfg  switch.cfg     timeperiods.cfg
      contacts.cfg  printer.cfg    templates.cfg  windows.cfg

    Here is my deployment.ini (pretty much the default setting):

      ;; LOCAL deployment ;;
      [extract config]
      type = local
      source_file = "/var/www/html/nconf/output/NagiosConfig.tgz"
      target_file = "/tmp/"
      action = extract

      [copy collector config]
      type = local
      source_file = "/tmp/Default_collector/"
      target_file = "/usr/local/nagios/etc/Default_collector/"
      action = copy

      [copy global config]
      type = local
      source_file = "/tmp/global/"
      target_file = "/usr/local/nagios/etc/global"
      action = copy
      reload_command = "service nagios restart"

    What I am wondering is why the directory structure that the default deployment.ini seems to suggest, with Default_collector and global, is different from the one that Nagios has by default, with only a folder called objects. What am I missing? Or more importantly, how does your deployment.ini look?
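    The two layouts can coexist: nconf deploys its own Default_collector and global directories alongside the stock objects folder, and Nagios only needs to be told to read them. A hedged addition to /usr/local/nagios/etc/nagios.cfg (the stock cfg_file lines for objects/ can then be kept or commented out as preferred):

      cfg_dir=/usr/local/nagios/etc/Default_collector
      cfg_dir=/usr/local/nagios/etc/global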

    Read the article

  • Get IP or MAC addresses of Windows Multipoint Server 2012 stations?

    - by user1454265
    Is it possible to programmatically retrieve the IP or MAC address of a station assigned to a Windows MultiPoint Server 2012 host, using PowerShell or any other .NET or Windows API? Background: I'm developing an application to help set up USB-over-Ethernet zero clients in a WMS 2012 setup, bridging the PowerShell "WmsCmdlets" module (Microsoft.WindowsServerSolutions.MultipointServer.PowerShell.Commands.Library.WmsStation) and a third-party vendor API for configuring zero client IP address, etc. So far, I do not know of any means of matching up the "stations" of the WmsCmdlets with the zero client objects in the vendor's API. Finding out the IP or MAC associated with a WMS station would do nicely, since I have those on the zero client API side. However, I haven't found any information I could use in the PowerShell WmsCmdlets module, such as Get-WmsStation, which returns the following:

      Id                         : 1
      Name                       : <my station name>
      IsAutoLogOn                : False
      IsSplit                    : False
      CollabId                   : 0
      RemoteConnectionServerName :
      VirtualMachineName         :
      VirtualMachineId           :
      AutoLogOnUserName          :
      AutoLogOnPassword          :
      DeviceTypes                : {DT_Mouse, DT_Keyboard, DT_Audio, DT_MassStorage...}
      DeviceCounts               : {2, 2, 0, 0...}
      ComputerName               : <my WMS host server name>
      SessionId                  : 4294967295
      SessionHostServer          : <my WMS host server name>

    Read the article

  • how to properly set environment variables

    - by avorum
    I've recently started using Windows (having used Ubuntu up until now) and I find myself unable to properly set environment variables. Whenever I set them, they don't seem to work. I've been going to Start - Edit environment variables for your account and editing the PATH value in the upper half of the GUI. Here's what I've got so far:

      ;C:\Chocolatey\bin;C:\tools\mysql\current\bin;C:\Program Files (x86)\Git\bin;C:\Program Files\MySQL\MySQL Server 5.6\bin\;C:\Python33\Scripts;

    These are each the parent directories of the executables I'd like to be able to run by name from CMD, but mysql, git, and pip aren't being recognized. Am I doing something wrong syntactically, or at a general understanding level? I'd like to be able to run these commands without having to specify the full path to the executables every time. EDIT: the full PATH extracted from CMD:

      PATH=C:\Python33\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\AMD APP\bin\x86_64;C:\Program Files (x86)\AMD APP\bin\x86;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\GTK2-Runtime\bin;C:\Program Files\WIDCOMM\Bluetooth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files (x86)\Common Files\Acronis\SnapAPI\;C:\Program Files (x86)\Java\jre7\bin;C:\Program Files (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program Files (x86)\Microsoft SDKs\TypeScript\;C:\Program Files (x86)\MySQL\MySQL Utilities 1.3.4\; ;C:\Chocolatey\bin;C:\tools\mysql\current\bin

    I'm being forced to use Windows by my work environment; I don't enjoy the state of affairs.
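    Two hedged things to check: the user value starts with a stray ";" (and the expanded PATH shows a lone " ;" entry right where your additions begin), and an already-open cmd window never picks up changes made in that dialog, so a new window has to be opened after saving. A PowerShell sketch that rebuilds the user PATH cleanly from the directories in the question:

      $dirs = 'C:\Chocolatey\bin',
              'C:\tools\mysql\current\bin',
              'C:\Program Files (x86)\Git\bin',
              'C:\Program Files\MySQL\MySQL Server 5.6\bin',
              'C:\Python33\Scripts'
      $user = [string][Environment]::GetEnvironmentVariable('Path', 'User')
      $keep = @($user -split ';' | Where-Object { $_ -and ($dirs -notcontains $_) })   # keep existing entries, drop blanks and duplicates
      [Environment]::SetEnvironmentVariable('Path', (($keep + $dirs) -join ';'), 'User')
      # then open a NEW cmd window and verify with:  where git  /  where mysql  /  where pip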

    Read the article

  • In GNU Screen, Recalled bash history command displays one character position to the left of actual location

    - by vergueishon
    I am running Red Hat 5 32-bit (2.6.18-194.26.1.el5). The issue is that when I recall any previous command from bash's history, the first character of the command is displayed immediately after the shell prompt, without any intervening space, like so:

      [me@mymachine tmp]$man mysql

    If I enter a Ctrl-C and retype the command, it looks like so:

      [me@mymachine tmp]$ man mysql

    This makes recalling a command and editing it before re-entering a real pain. Basically, if I try to edit a recalled command, my changes occur one character position to the left (I believe) of what I see on the screen. It's a bit tedious to describe, and it appears to only happen with commands that have a large number of arguments. UPDATE: the contents of /etc/sysconfig/bash-prompt-screen:

      #!/bin/bash
      echo -n $'\033'"_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"$'\033\\\\'

    and the relevant part of /etc/bashrc:

      screen)
          if [ -e /etc/sysconfig/bash-prompt-screen ]; then
              PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
          else
              PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"; echo -ne "\033\\"'
          fi
          ;;

    I've disabled bash-prompt-screen by renaming it, and this fixed it. It's entirely possible that there is a fix to the bash-prompt-screen prompt line in the latest version of screen for RHEL 5. The error is seen under Screen version 4.00.03 (FAU) 23-Oct-06. (I noticed an update in the queue, which is installing as I write this.)
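    A hedged guess, given that renaming the script cures it: in $'...' quoting, $'\033\\\\' expands to an escape character followed by two backslashes, so the script terminates screen's status escape (ESC \) and then prints one stray literal backslash to the terminal, leaving the cursor one column away from where readline believes the prompt ends. The /etc/bashrc fallback emits only ESC \; trimming the script to match would be worth a try:

      #!/bin/bash
      echo -n $'\033'"_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"$'\033\\'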

    Read the article

  • mysql cmd prompt import data.sql

    - by udhaya
    I want to import an SQL file using the cmd prompt. First I open a Windows cmd prompt, navigate to the xampp/mysql/bin folder and run mysql. This error occurs:

      D:\Program Files\xampp\mysql\bin>mysql
      ERROR 1045 (28000): Access denied for user 'ODBC'@'localhost' (using password: NO)

      D:\Program Files\xampp\mysql\bin>mysql -u root -p -h localhost dev1base < dev1base.sql
      Enter password:

      D:\Program Files\xampp\mysql\bin>
      D:\Program Files\xampp\mysql\bin>mysql -u root
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 104
      Server version: 5.0.51a Source distribution

      Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

      mysql>
      mysql> -h localhost dev1base < dev1base.sql
          ->
          ->
          ->
          ->
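    A hedged reading of that transcript: the < redirection form only works from the Windows command prompt, not from inside the mysql> monitor, where "-h localhost dev1base < dev1base.sql" is just treated as an unfinished SQL statement (hence the -> continuation prompts). The second command above, run with -p, most likely performed the import silently. To repeat it from cmd.exe, or to do it from inside the monitor:

      REM from cmd.exe, in D:\Program Files\xampp\mysql\bin
      mysql -u root -p dev1base < dev1base.sql

      -- or from inside the mysql monitor (the file path is resolved relative to the directory mysql was started in)
      mysql> USE dev1base;
      mysql> SOURCE dev1base.sql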

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using Daloradius with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link:

      http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA

    and was looking specifically at the following section ("Since our users may be connected for more than 24 hours at a time we keep this in here; it will reset some attributes daily so that the accounting packets work correctly"):

      always fail {
          rcode = fail
      }
      always reject {
          rcode = reject
      }
      always ok {
          rcode = ok
          simulcount = 0
          mpp = no
      }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data? EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although they are replicating, it's only a master/slave configuration; if the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or freeradius 2 for clients connected 24/7?
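    For what it's worth, in freeradius 2 those "always" instances did not disappear; they were moved out of radiusd.conf and are normally shipped as a separate module file (from memory, raddb/modules/always, e.g. /etc/raddb/modules/always or /etc/freeradius/modules/always), with the same syntax:

      always ok {
          rcode = ok
          simulcount = 0
          mpp = no
      }

    They simply define module instances that always return a fixed result code, so their presence or absence is unlikely on its own to explain the missing accounting rows.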

    Read the article

  • Midnight Commander Woes: Output while panels are active, and tab completion.

    - by Eddie Parker
    I'm trying out Midnight Commander (loved Norton back in the day!) and I'm finding two things hard to work out. I'm curious whether there are ways around them.

    1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever. That is, if the panels are visible and I cat something (e.g., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn. Is there any way to see the output? I've tried Ctrl-O, but it appears to just give me a fresh sub-shell and wipes the previous output away. Pausing after every invocation is a bit irritating, so I'd rather not use that option.

    2) Tab completion for commands. When mc is running, it consumes the Tab character for switching panels. Is there any way to get around this so I can still complete paths and commands on the command line? I'm running Cygwin, if that matters at all.

    Read the article

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts: one I run as root, which configures the system using apt-get (amongst other things), and one I run as me, which configures my user account. Those scripts live in my private Subversion repository. To set up a dev environment from scratch requires five commands:

      sudo apt-get install -y subversion
      svn co http://svn.XXXX.XXX/personal/
      cd personal
      sudo ./ubuntu_setup_root.sh
      ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN ...
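    One hedged answer is Chocolatey driven from a short PowerShell bootstrap; the install one-liner is the one published on chocolatey.org and the package names below are only illustrative, so both are worth checking against the current feed:

      Set-ExecutionPolicy Bypass -Scope Process -Force
      iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
      choco install -y git svn 7zip notepadplusplus

    Heavier pieces like Visual Studio itself still tend to need the MSDN ISO plus an unattended-install answer file, which can also live in source control.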

    Read the article

  • Win 2008 R2 terminal server and redirected printer queue security

    - by Ian
    I have a case where I need a non-privileged account to be able to make a modification to the redirected printer. I know it's not advisable, but we're not giving them access; changes will be made in code. So, following the docs (http://technet.microsoft.com/en-us/library/ee524015(WS.10).aspx) I modified the default security for new printer queues. This doesn't work, though, as Windows doesn't seem to assign the privileges you configure in the printer admin tool to redirected printer queues. As a test I added a non-privileged test user to the default security tab in the printer admin tool (Control Panel - Admin Tools - Printer Admin). I assigned it all privileges (it's a test) and logged the user into the terminal server. The redirected printers duly appeared as usual. However, if I open the printer properties - security tab, the user appears in the list of accounts/groups but the options I selected (all privileges) are not set. Instead the special permissions box is marked, and when I click on 'advanced options' and view them, there is nothing marked. So, something is clearing these options.... the question is why, and how can I convince it not to? Ian

    Read the article

  • Performance Test and TCP tuning

    - by Mithir
    We are in the process of performance testing an application which receives TCP requests, converts them to SOAP requests (WCF, httpBinding), and hands them to other services. The server is Windows Server 2008 R2. The TCP requests are received by a TcpListener instance (.NET C#). There are 3 http-bound WCF services running on the same server. We have built a performance test client whose goal is to simulate multiple concurrent requests (each request has to be different and recognizable by the application). We built a test running 150 requests at the same time (on 150 different threads), and we noticed straight away that some requests get the TCP connection slowly, but once they get it, they act fast. A single request writes twice on the same connection: the request and an application ack. Although a single request+ack can take about 150ms, the 150-request test takes about 7 seconds. The problem: when we try to run this test from 2 different computers, we lose requests. Some client requests fail with "no connection could be made because the target machine actively refused it". So I got here and became convinced it was because of the backlog. I changed the TcpListener parameters and made the registry AFD backlog changes described there, but it still didn't work, so I applied all of the suggested TCP tuning plus some recommended netsh commands, but still no change; we still get that error. Is there anything else I need to know? Are there any other solutions?
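    For reference, "actively refused" on the client side usually means the connection request never made it into the listen queue, either because the queue was full or because the accept loop fell behind. A minimal listener-side sketch with an explicit backlog (TcpListener.Start has an overload that takes one) and a hand-off so the accept loop never blocks on the SOAP conversion; the port, backlog value, and HandleClient routine are placeholders:

      using System;
      using System.Net;
      using System.Net.Sockets;
      using System.Threading;

      class TcpFrontEnd
      {
          static void Main()
          {
              TcpListener listener = new TcpListener(IPAddress.Any, 9000);
              listener.Start(500);   // queue up to 500 pending connects instead of refusing them
              while (true)
              {
                  TcpClient client = listener.AcceptTcpClient();
                  // hand the connection off so the accept loop is free to drain the backlog
                  ThreadPool.QueueUserWorkItem(s => HandleClient((TcpClient)s), client);
              }
          }

          // placeholder for the application's real request handling
          static void HandleClient(TcpClient client)
          {
              using (client) { /* read request, call the WCF service, write the application ack */ }
          }
      }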

    Read the article

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mail server lists by handling all the bounces we get from our websites. I have one website with over 8,000 users and another with over 1,500 users; they are emailed various notifications all the time (job alerts, email alerts, and so on). I am using POP connections with Exim on an Apache server, and most scripts are PHP and generate email on the fly.

    Problems I have:
    1. Some users registered a long time ago and quite a few now have bouncing email addresses.
    2. Some users register with dummy emails like [email protected], which never existed but are valid-looking addresses. Is there any chance of stopping this without asking them to log in to the email account and click links, which don't work most of the time? (Too annoying for the end user.)
    3. The server is sending unnecessary emails that could be avoided if I knew the addresses don't exist.

    Solutions I need to have:
    1. Is there a way I can download the bounce email list somewhere (WHM/cPanel)? I know Exim has it, but it's not readable (I need a file like CSV or something similar that I can scan and then write a PHP script to delete the users from the database).
    2. I need to know if there is any function in PHP that can check the existence of an email address on the fly, so that I can set the email send function in the mailer class to check before it sends out (see the sketch below).
    3. Will bouncing emails eat up a lot of server resources (memory/CPU) on processing, or is it minimal enough that we don't have to worry about it at all?
    4. Maybe an open source or Linux tool to capture them, view them on a report basis, and clean them up?

    I am not a Linux expert or server admin, but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they are Linux commands. Thank you!
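    On the PHP question (solution 2): there is no built-in function that can prove a mailbox exists, but a hedged pre-flight check can at least reject malformed addresses and domains that publish no mail server, which covers made-up domains. A minimal sketch:

      <?php
      // True when the address is well-formed and its domain advertises an MX (or at least an A) record.
      // This cannot prove the individual mailbox exists -- bounce handling is still needed for that.
      function looks_deliverable($email)
      {
          if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
              return false;
          }
          $domain = substr(strrchr($email, '@'), 1);
          return checkdnsrr($domain, 'MX') || checkdnsrr($domain, 'A');
      }

      // usage inside the mailer class -- send_notification() is a placeholder for your own send routine
      if (looks_deliverable($user_email)) {
          send_notification($user_email, $subject, $body);
      }
      ?>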

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my webui and am able to get responses from my knife commands (knife node list, knife client list, knife user list, etc.). I'm able to update roles, data bags, environments, etc., but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of my "knife cookbook upload -VV curl" command. It doesn't matter what cookbook I upload, or if I upload them all; I keep getting the same response:

      DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response
      DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response
      /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError)
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop'
          from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process'
      INFO: HTTP Request Returned 204 No Content:
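    A hedged reading of that trace: the workstation is running chef 12.0.0.alpha.1 (a pre-release gem) against an open source Chef 11 server, and the client's JSON output middleware is tripping over the server's bare "204 No Content" reply to the cookbook PUT. Pinning the workstation back to a stable Chef 11 client is one thing to try; the version below is only an example:

      gem uninstall chef
      gem install chef -v 11.16.4
      knife cookbook upload curl -VV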

    Read the article

  • Subversion Edge LDAP (require CAC Certificate not Username and Password)

    - by Frank Hale
    What I've Done: I've successfully installed and configured Subversion Edge 3.1.2 with LDAP support on a Windows 2008 server. I have configured LDAP users and am able to use LDAP credentials to work on repositories just fine. No issues whatsoever; it works great! What I Want To Do: I've been searching for several hours now in hopes of finding some information on how to configure the Subversion Edge server to require client certificates (CAC) for user authentication against an LDAP environment, but I have not found anything yet that gives me an indication of how to do it. I know there are SVN clients that are capable of prompting for CAC certificates, but I cannot figure out how to set my server up to require them. NOTE: CAC authentication is already set up and working in the Windows environment. Desired Outcome: when running svn commands that require authentication against my Subversion Edge server, I want it to prompt me for my CAC certificate instead of my Active Directory username and password. If anyone has any information on this I'd greatly appreciate it. EDIT: I'm still digging, so if I find out anything I'll update this question with what I found.
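    Since Subversion Edge fronts the repositories with Apache httpd, the "require a client certificate" half is normally expressed with mod_ssl directives; a hedged sketch (which conf file to edit under the Edge data/conf directory and the CA bundle path are assumptions, and mapping the presented certificate to an LDAP account still needs its own configuration on top of this):

      SSLVerifyClient require
      SSLVerifyDepth  3
      SSLCACertificateFile "C:/path/to/dod-ca-bundle.pem"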

    Read the article
