Search Results

Search found 30046 results on 1202 pages for 'linq via c series'.


  • Looking for a help desk ticketing system

    - by Dan
    Hi guys, I'm looking for a good help desk ticket solution. It must perform the following actions for it to be useful:

    1. It needs to have a single point of contact via email, e.g. [email protected]
    2. If we receive a telephone call (or an email outside of the system) we need to be able to create a ticket as if it had been added via the single point of contact, and this needs to be done with ease in order to save time.
    3. Certain people within our organisation deal with certain customers, so if the email / custom-input support call mentioned in point 2 is picked up as having a relationship with a certain person in our organisation, it needs to be sent to them / put in their queue for them to work on.
    4. If a person is out of office or sick, any tickets sent to them must be forwarded to somebody else or put into a separate pool of tickets that anybody can access. Perhaps have an agent that sorts through tickets in the pool and assigns them to anybody who is available, preferably the person with the fewest tickets in their queue/list.
    5. Once a customer emails and the system logs it, they immediately get a response with a ticket number and maybe details of who is dealing with the problem.
    6. Any correspondence in relation to a particular ticket is automatically grouped into a single thread, and not made into a load of separate tickets, i.e. the system scans incoming email subjects for ticket numbers and associates them with existing tickets where that number exists.

    Any help is much appreciated. Thanks. P.S. I have taken a look at OTRS but I'm not feeling it, so unless someone can convince me I guess I'm after an alternative.


  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just got a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook, and others using Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options. I read through SOGo's official install guide and also checked here and here. However, most of the HowTos call for installing MySQL/PgSQL, LDAP, Samba, etc. While I can manage the installation of Samba (if required), I have no idea whether installing LDAP, MySQL etc. is really required. Also, any guidance on how to install this on a regular mail server would be appreciated. Sorry if this sounds vague; if any more information is required, I'll be happy to give it. Thanks in advance. Edit: the server in question has always been managed via cPanel (to install PHP, MySQL, configure DNS etc.), so I am confused about whether I really need LDAP.
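    For what it's worth, LDAP isn't mandatory: SOGo can take its users straight from a SQL table. A minimal sketch, assuming a CentOS-based cPanel box with SOGo installed from the project's yum repository (package names, credentials and the table layout below are assumptions - check SOGo's current install docs):

        # SOGo with a MySQL backend instead of LDAP
        yum install sogo sope49-gdl1-mysql

        # /etc/sogo/sogo.conf - a SQL user source in place of an LDAP one
        {
          SOGoUserSources = ({
            type = sql;
            id = users;
            viewURL = "mysql://sogo:secret@localhost:3306/sogo/sogo_users";
            canAuthenticate = YES;
            isAddressBook = YES;
          });
        }

    Thunderbird can then talk CalDAV/CardDAV to SOGo (via the Lightning and SOGo Connector add-ons); Outlook support is the harder part to verify up front.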


  • stunnel client uses improper SNI when talking to Apache

    - by Huckle
    I have stunnel listening on port 80 and acting as a client connecting to Apache listening on port 443. Configuration is below. What I'm finding is that if I connect to localhost:80 the connection is fine, but if I connect to 127.0.0.1:80 the request is rejected with a 400. Apache's logs indicate that stunnel is using localhost as the SNI both times, but the HTTP request lists localhost in one case and 127.0.0.1 in the other. Is it possible to tell stunnel to either use whatever is in the HTTP request, or to somehow configure two clients, each with different SNI values?

    stunnel.conf:

        debug = 7
        options = NO_SSLv2

        [xmlrpc-httpd]
        client = yes
        accept = 80
        connect = 443

    Apache error.log:

        [error] Hostname localhost provided via SNI and hostname 127.0.0.1 provided via HTTP are different

    Apache access.log:

        "GET / HTTP/1.1" 200 2138 "-" "Wget/1.13.4 (linux-gnu)"
        "GET / HTTP/1.1" 400 743 "-" "Wget/1.13.4 (linux-gnu)"

    wget:

        $ wget -d localhost
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: localhost
        Connection: Keep-Alive
        ---request end---

        $ wget -d 127.0.0.1
        ---request begin---
        GET / HTTP/1.1
        User-Agent: Wget/1.13.4 (linux-gnu)
        Accept: */*
        Host: 127.0.0.1
        Connection: Keep-Alive
        ---request end---

    Edit: Apache config - nothing out of the ordinary, it's just a virtual host listening on 443: <VirtualHost *:443>
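    If two fixed SNI values are acceptable, stunnel's client-mode sni option may do it. A minimal sketch, assuming a stunnel version whose client mode supports sni (the second loopback address and the hostname are assumptions; note that SNI values must be hostnames, never IP literals):

        [http-localhost]
        client = yes
        accept = 127.0.0.1:80
        connect = 443
        sni = localhost

        [http-althost]
        client = yes
        accept = 127.0.0.2:80
        connect = 443
        sni = myhost.example.com

    Clients that would have hit 127.0.0.1 would instead target 127.0.0.2 (or a hostname mapped to it in /etc/hosts), since an IP literal can never match the SNI Apache expects.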


  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
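    One possibility on a non-GUI box: gnome-keyring's daemon itself doesn't need a desktop, only a D-Bus session. A minimal sketch, assuming a Subversion build with GNOME Keyring support and a gnome-keyring version whose daemon accepts --unlock on stdin:

        # ~/.subversion/config
        [auth]
        password-stores = gnome-keyring

        # start a private D-Bus session plus an unlocked keyring, then use svn normally
        eval "$(dbus-launch --sh-syntax)"
        eval "$(echo -n "$KEYRING_PASSWORD" | gnome-keyring-daemon --unlock)"
        svn commit -m "minor layout changes"   # cached encrypted, never plaintext on disk

    The keyring password still has to come from somewhere once per login session, but the SVN password itself never lands on the filesystem unencrypted.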


  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) with about 20 client computers. The domain server for this domain and all its clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control over domain B and am unable to make changes to anything there.

    The problem I have is that currently, in order for domain A's clients to be able to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't access the internet correctly outside of our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry. P.S. The solution I'm trying to find needs to require no involvement from the user.


  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box (C) via SSH & Samba that is hidden behind/connected through another one (B). Setup:

        A           switch       B               C
        |----|      |---|      |----|        |----|
        |eth0|------|   |------|eth0|        |    |
        |----|      |---|      |eth1|--------|eth1|
                               |----|        |----|

    E.g. SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone - or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what if another PC has that IP?! More likely it would be PAT: A would say "hi 192.168.109.15:1234", and B would say "hi on eth0, traffic for port 1234 goes on here, eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:

        A - eth0 - 192.168.109.2
        B - eth0 - 192.168.109.15
            eth1 - 192.168.0.1
        C - eth1 - 192.168.0.2

    A, B & C are RHEL (RedHat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable. Any help?
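    This is destination NAT (port forwarding), and iptables on B can do it. A minimal sketch to run on B; the forwarded ports (2222 for SSH, 445 for Samba/CIFS over TCP) are assumptions:

        # on B: allow forwarding between interfaces
        sysctl -w net.ipv4.ip_forward=1

        # B:2222 -> C:22 (SSH), B:445 -> C:445 (Samba)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2222 -j DNAT --to-destination 192.168.0.2:22
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 445  -j DNAT --to-destination 192.168.0.2:445

        # rewrite the source so C's replies come back through B
        iptables -t nat -A POSTROUTING -o eth1 -d 192.168.0.2 -j MASQUERADE

        # then, from A:
        ssh -p 2222 user@192.168.109.15

    The daemons on C work fine with this; the one catch is that because of the masquerade they see B's eth1 address as the client IP rather than A's.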


  • ssh X11 forwarding issue

    - by bbuser
    I have put ForwardX11 in my ~/.ssh/config and then I start an X11 application like this:

        ssh -f user@host 'someapp; sleep 1'

    This works fine. The application someapp has a button which opens a viewer application via a shell script viewer.sh. When I press the button the viewer comes up. This is all good and as expected, but if I do

        ssh -2 -f user@host 'someapp; sleep 1'

    there's trouble. someapp starts very well, but if I click the button the viewer doesn't show up. As the viewer is called via a shell script, I replaced the call with xclock and the situation was exactly the same - I think the viewer is not to blame. The situation is the same on Linux and AIX. The reason I need -2 is that I finally want to use connection multiplexing, and that only works with protocol version 2. The reason for the sleep 1 is that it didn't work otherwise ;-) To add more confusion, with

        ssh -2 -f user@host 'xterm &; app; sleep 1'

    the viewer works as long as the xterm is open. When I close xterm, ssh -v outputs the following:

        debug1: channel 1: FORCE input drain
        debug1: channel 0: free: client-session, nchannels 3
        debug1: channel 1: free: x11, nchannels 2

    and from that moment the viewer doesn't show when I press the button. I also replaced the viewer application with a script that writes the $DISPLAY variable to a file. The variable is always set correctly.
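    The debug lines suggest the session channel that owns the X11 forwarding is being drained once the initial command finishes, so X11 clients started later have nothing to ride on. Since multiplexing is the end goal anyway, a minimal sketch using a persistent control master (assumes OpenSSH 5.6+ for ControlPersist; the Host alias is an assumption, and whether the forwarded display survives the original session ending is exactly what needs testing here):

        # ~/.ssh/config
        Host apphost
            HostName host
            ForwardX11 yes
            ControlMaster auto
            ControlPath ~/.ssh/mux-%r@%h:%p
            ControlPersist 10m

        # the master stays up 10 minutes after the last session closes,
        # so late-spawned X11 clients may still find a live channel
        ssh -2 -f apphost 'someapp; sleep 1'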


  • Powershell: Execute exe on remote server and capture output

    - by user364825
    I am trying to script the execution of an installer on remote web servers. The installer in question is also a Windows Service that hosts NServiceBus. If RDP'd into the server, the application is installed by the following command:

        &"$theInstaller" /install /serviceName:TheServiceName

    The installer prints output about its progress registering the service and connecting to the database to stdout, among other things. This works fine from an RDP session, but when I execute it remotely via PS - either directly or via Invoke-Command -computername $theRemoteServer - I get a you-can't-do-this-over-the-network message:

        System.IO.FileLoadException: Could not load file or assembly
        'file://\\theRemoteServer\c$\thePath\AutoMapper.dll' or one of its dependencies.
        Operation is not supported. (Exception from HRESULT: 0x80131515)
        ---> System.NotSupportedException: An attempt was made to load an assembly from a
        network location which would have caused the assembly to be sandboxed in previous
        versions of the .NET Framework. This release of the .NET Framework does not enable
        CAS policy by default, so this load may be dangerous. If this load is not intended
        to sandbox the assembly, please enable the loadFromRemoteSources switch. See
        http://go.microsoft.com/fwlink/?LinkId=155569 for more information.

    This DLL, among others, is loaded by the service, and the service's execution context cannot, apparently, be remotified. I have also tried Invoke-WmiMethod, which does something, but it's not clear what, and the output from the installer is lost:

        Invoke-WMIMethod win32_process create '"$theInstaller" /install /serviceName:TheServiceName' -ComputerName $server

    (with and without cmd.exe /k before the installer reference):

        __GENUS          : 2
        __CLASS          : __PARAMETERS
        __SUPERCLASS     :
        __DYNASTY        : __PARAMETERS
        __RELPATH        :
        __PROPERTY_COUNT : 2
        __DERIVATION     : {}
        __SERVER         :
        __NAMESPACE      :
        __PATH           :
        ProcessId        :
        ReturnValue      : 9

    How does one remotely execute such an EXE and capture the output? Thanks!


  • Are there any Microsoft Exchange Clients for iOS and Android that store their local data in an encrypted manner?

    - by Zac B
    I don't feel like this is a product recommendation question, more of a "does this tech even exist and is it feasible" question, but if I'm wrong, feel free to give this question the boot.

    Context: our company has a bunch of traveling employees who access the company's Exchange server via their iDevices or Android phones, but because of the data protection laws in the state where our company is based (and the nature of the data our company works with), a recent security audit found that all mobile devices (laptops, phones, etc.) operated by our company need to have all company correspondence and related data encrypted all the time. For laptops, that was easy: BitLocker or TrueCrypt, problem solved. For phones and tablets, however, I'm stumped. Sure, you can put lock screens/passwords on the phones, but the data is still accessible via external extraction, as law enforcement authorities already know.

    Question: are there any clients for Microsoft Exchange that run on iOS or Android which store local data encrypted? The people using our mobile devices do a lot of their work while offline, so just giving them OWA access with SSL connection security isn't enough. Are there apps/technologies that present an additional login credential prompt to decrypt locally stored data in the app's storage area on the phone? My gut reaction when I started looking into this was "that doesn't sound like something Apple would allow into the App Store", but I've been wrong before...


  • Windows Server 2008 Remote Desktop printing blank pages

    - by Colin Pickard
    I have a Windows Server 2008 (not R2) machine which has problems with redirected printing. Clients connecting via Remote Desktop have their printers redirected and appearing for them to print to, but printing from applications on the server to those redirected printers gives blank pages, missing pages, or pages with headers/footers but no middle section. The issues are consistent for similar prints, but sometimes other prints and/or applications will work correctly. I have installed PDFCreator locally on the server, and the same print jobs sent by the same application appear correctly in the PDFs. Printing that PDF via the redirected printer prints correctly. I have tried the following:

    - Installing drivers. I've tried several different drivers, for both the client and server operating system and architecture, on both the client and the server.
    - Reinstalling the printers. I've tried reinstalling on remote print servers, the clients, and the host server, and tried different client machines.
    - Granting everyone full permissions on the print spool folder on the server.
    - Editing the registry to forward non-USB ports (http://support.microsoft.com/kb/302361).

    None of these have made any difference. The clients are using Windows 7 or Windows XP and none of them have any issues printing locally. Any ideas? Thanks!


  • ubuntu: Installed php-mcrypt but it doesn't show up in phpinfo()

    - by jules
    A web app I'm trying to install on my Ubuntu 10.04 LTS box requires mcrypt, and is generating this error: Fatal error: Call to undefined function mcrypt_module_open(). I know this is the same question as this one: "Installed php-mcrypt but it doesn't show up in phpinfo()", but I tried several things, none of which worked, and I have additional questions. I would comment on the original thread but don't have enough reputation to do so; forgive me for the duplicate question. My versions of php and mcrypt are (both installed via apt-get):

        php: 5.3.2-1ubuntu4.10
        mcrypt: 5.3.2-0ubuntu

    php -m shows that the mcrypt module is installed. I installed mcrypt and php5-mcrypt via apt-get. Also, I'm using nginx as my web server. I have tried reinstalling mcrypt and restarting nginx, but still can't get mcrypt to show up in phpinfo(), and calls to mcrypt are still broken. Here is some more info:

        $ php -i | grep "mcrypt"
        /etc/php5/cli/conf.d/mcrypt.ini,
        mcrypt
        mcrypt support => enabled
        mcrypt.algorithms_dir => no value => no value
        mcrypt.modes_dir => no value => no value

    I also checked that mcrypt is on in /etc/php5/cli/conf.d/mcrypt.ini and /etc/php5/cgi/conf.d/mcrypt.ini. Lastly, I'm using FastCGI with nginx. I googled around and saw suggestions to restart php5-fpm. I couldn't find php5-fpm in apt-get, and I'm not sure if I still need php5-fpm since I already have FastCGI. Is there anything else I'm missing?
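    Note that php -i and php -m report on the CLI SAPI, while nginx talks to separate php-cgi worker processes that only read their ini files at startup - so the usual culprit is stale FastCGI workers rather than a missing module. A minimal sketch of restarting them (the spawn-fcgi invocation is an assumption; match however the workers are actually launched on this box):

        # kill the running FastCGI PHP workers
        sudo killall php5-cgi

        # respawn them (assumed spawn-fcgi setup; adjust address/port/user)
        spawn-fcgi -a 127.0.0.1 -p 9000 -C 4 -u www-data -f /usr/bin/php5-cgi

        # verify through the web SAPI, not the CLI
        echo '<?php var_dump(extension_loaded("mcrypt"));' | sudo tee /var/www/mcrypt-test.php

    (php5-fpm isn't in the 10.04 repositories; it only became the standard FastCGI manager in later releases, so killall-and-respawn is the 10.04-era equivalent of "restart php5-fpm".)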


  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using altq for traffic shaping) and ipfw's dummynet for bandwidth limiting certain IPs or subnets, at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as:

    - It works
    - It doesn't work
    - It might work but it's not stable and not recommended
    - It can work as long as you load the kernel modules in the right order
    - It used to work but with recent FreeBSD versions it doesn't
    - You can make it work provided you use a patch from pfSense

    Then there's a mention that this patch might have been merged back into FreeBSD, but I can't find it. One certain thing is that pfSense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where do I obtain the patch if it's still necessary)? For reference, here's a sample of what I have for now and how I load things.

    /etc/rc.conf:

        ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum"
        ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum"
        defaultrouter="80.224.45.1"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/etc/ipfw.rules"
        pf_enable="YES"
        pf_rules="/etc/pf.conf"

    /etc/pf.conf:

        WAN1="vtnet0"
        LAN1="vtnet1"

        set skip on lo0
        set block-policy return

        scrub on $WAN1 all fragment reassemble
        scrub on $LAN1 all fragment reassemble

        altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default }
        queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%)
        queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%)

        nat on $WAN1 from $LAN1:network to any -> ($WAN1)

        block in all
        block out all
        antispoof quick for $WAN1
        antispoof quick for $LAN1

        pass in on $WAN1 inet proto icmp from any to $WAN1 keep state
        pass in on $WAN1 proto tcp from any to $WAN1 port www
        pass in on $WAN1 proto tcp from any to $WAN1 port ssh
        pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state
        pass out on $WAN1 keep state
        pass in on $LAN1 from $LAN1:network to any keep state

    /etc/ipfw.rules:

        ipfw -q -f flush
        ipfw -q add 65534 allow all from any to any
        ipfw -q pipe 1 config bw 2048KBit/s
        ipfw -q pipe 2 config bw 2048KBit/s
        ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out
        ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in
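    Since several of those answers hinge on module load order, a minimal sketch of making that order explicit at boot (stock FreeBSD 10 module names; whether the combination is stable there is exactly what's in question, so treat this as an experiment):

        # /boot/loader.conf - load ipfw and dummynet before pf
        ipfw_load="YES"
        dummynet_load="YES"
        pf_load="YES"
        net.inet.ip.fw.default_to_accept="1"   # avoid lockout before the ruleset loads

    For a one-shot test without rebooting:

        kldload ipfw dummynet pf
        kldstat | egrep 'ipfw|dummynet|pf'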


  • Cannot set up dual monitors correctly in Fedora 15 with KDE

    - by adivasile
    I have 2 monitors:

    - 24" LCD connected via DVI (primary)
    - 19" LCD connected via VGA (secondary)

    Every time Fedora starts, the second display is set to clone the first and they both run at 1280x1024, so I always have to disable the 19" monitor in order for the bigger one to run at 1920x1080. I want to set them up so that the secondary monitor extends the primary one. The problem is that no matter what kind of configuration I choose, it has no effect - my secondary monitor remains disabled. I've tried using both the Display manager from KDE and the ATI Control Panel, and the behaviour is always the same: the moment I click Apply, the screen flickers and nothing changes. I've successfully used the extended setup in Fedora 15 with GNOME 3. I have a Radeon HD 4300 series video card and I'm using the drivers downloaded from the AMD site. This is the output of xrandr -q:

        Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 1920 x 1920
        VGA-0 connected (normal left inverted right x axis y axis)
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1024x768       75.0     70.1     66.0     60.0
           832x624        74.6
           800x600        72.2     75.0     60.3     56.2
           640x480        75.0     72.8     66.7     59.9
           720x400        70.1
        DVI-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm
           1920x1080      60.0*+   60.0
           1680x1050      59.9
           1600x900       60.0
           1280x1024      75.0     60.0
           1280x960       60.0
           1152x864       75.0
           1280x720       60.0
           1152x720       60.0
           1024x768       75.0     60.0
           832x624        74.6
           800x600        75.0     60.3
           640x480        75.0     59.9
           720x400        70.1

    Later edit: the problem seems to come from the ATI drivers. I managed to set up the monitors like I wanted after I uninstalled them. Unfortunately I'm working on an OpenCL project, so I had to reinstall the drivers. The moment I did that, all my previous settings were forgotten and I was back to square one.
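    Two things worth trying from a terminal. First, plain xrandr using the output names shown above (this bypasses the KDE dialog but still goes through the driver):

        xrandr --output DVI-0 --mode 1920x1080 --primary \
               --output VGA-0 --mode 1280x1024 --right-of DVI-0

    Second, note the "maximum 1920 x 1920" line: the extended layout needs a 3200x1080 virtual screen, which exceeds that limit, and that alone can make every Apply silently fail. With the fglrx driver the dual-head layout generally has to be configured through its own tool instead - something like the (assumed) aticonfig --initial=dual-head --screen-layout=right, followed by restarting X.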


  • Joining two routers together, but I have no access to the second router, although I know its IP address and gateway

    - by JohnnyVegas
    I have temporarily moved into a rented apartment for 4 months, which has wireless. The trouble I am having is that the access points here are wifi-only with no RJ45, and I need RJ45 to connect some equipment that I am working with. I have purchased an RT-N66U and installed Tomato (Shibby ver. 1.28) and successfully replaced the existing access point, but now I want to re-enable the access point that I replaced, as it links wirelessly to 3 others. Can I plug a cable from the access point into my RT-N66U and have it access the internet via my router? I have no access to the existing wireless access point, and don't want to reset it as it's not mine.

    There is another router situated in the roof somewhere which I also have no access to, but it supplies my RT-N66U with internet, so I most definitely have a double NAT. That isn't the best way of doing things, but I am limited in what I can do. Any suggestions on routing tables, VLANs etc. would be helpful; I have no experience in these fields, but I know the Tomato firmware can cater for this.

    My router is set to IP 10.0.1.1 and DHCP is 10.0.1.100-200. The wireless access point's address was 192.168.1.2, assigned by the router in the roof, which has the address 192.168.1.1. There is a cable from that router going to a wall socket, which my RT-N66U is now attached to via its WAN port. I understand it's scruffy and not the way to do things, but I have tried to ask for the admin details, and since the wireless network is looked after by a third party and nobody knows their details, I am stuck with this dilemma. I could buy three wireless access points and replace the existing ones, but that isn't what I want to do, and although I have installed plenty of DD-WRT wireless repeater bridges, they simply don't work here for some unknown reason. The phone line here is very noisy too, I don't have the rights to install ADSL in a building that isn't mine, and 3G coverage isn't good enough either. Thanks for your time.


  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via the publisher's download manager, and it's loaded into the cutesy little audio program they force you to use. I can play the book just fine using their proprietary software, and while it's annoying on my PC, it's utterly unbearable when I try to listen on my BlackBerry. The program is insanely slow - it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to get back to where I was. I've looked everywhere for how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to make the CDs and then rip those to .MP3. In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD burning process. Just in case I've not explained myself very well, here is an overview of what I intend to do:

    1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player, otherwise I'll be forced to use the audiobook app)
    2. Mount an .ISO in Virtual CloneDrive
    3. Rip the audio tracks on the mounted .ISO to .MP3s
    4. Repeat steps 2-3 until the entire book is in .MP3 format
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer

    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R" the app would then ask you what file to save to.)


  • Exchange ActiveSync Exception

    - by Dmeglio
    One of the users on my network is having an issue with his iPhone syncing via ActiveSync. Overall it's working, but every now and then he gets a "Synchronization with your iPhone failed for 3 items." I asked him to go into OWA and turn on the Mobile Phone logging. I looked through the logs and this is what stood out to me:

        SyncCommand_GenerateResponsesXmlNode_AddChange_Exception :
        Microsoft.Exchange.Data.Storage.PropertyErrorException: Property:
        [{00062008-0000-0000-c000-000000000046}:0x8501] ReminderMinutesBeforeStartInternal,
        PropertyErrorCode: NotFound, PropertyErrorDescription: .
           at Microsoft.Exchange.Data.Storage.PropertyBag.ThrowIfPropertyError(StorePropertyDefinition propertyDefinition, Object propertyValue)
           at Microsoft.Exchange.Data.Storage.StoreObject.GetProperty(PropertyDefinition propertyDefinition)
           at Microsoft.Exchange.Data.Storage.MeetingMessage.get_Item(PropertyDefinition propertyDefinition)
           at Microsoft.Exchange.AirSync.SchemaConverter.XSO.XsoMeetingRequestProperty.get_NestedData()
           at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncMeetingRequestProperty.InternalCopyFrom(IProperty srcProperty)
           at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncProperty.CopyFrom(IProperty srcProperty)
           at Microsoft.Exchange.AirSync.SchemaConverter.AirSync.AirSyncDataObject.CopyFrom(IProperty srcRootProperty)
           at Microsoft.Exchange.AirSync.SyncCollection.ConvertServerToClientObject(ISyncItem syncItem, XmlNode airSyncParentNode, SyncOperation changeObject)
           at Microsoft.Exchange.AirSync.SyncCollection.GenerateCommandsXmlNode(XmlDocument xmlResponse, IAirSyncVersionFactory versionFactory, String deviceType, ProtocolLogger protocolLogger, MailboxLogger mailboxLogger)

    Does anyone have any idea what might cause this? We have 4 iPhone users connected to our Exchange via ActiveSync, and right now this seems to be the only user experiencing the issue. I'd appreciate any help anyone can provide. Thanks.


  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
    I have a similar question to this one, only my context is the 15" Retina MacBook Pro and Windows 8. I have just the original Mac OS X Mountain Lion on there, then Windows 8 via Boot Camp. No rEFIt installed. (I just press ALT every time I start Windows - actually as a security measure to stop tech-unsavvy thugs who, if the laptop is stolen, think it's only a Mac and don't discover my Windows install as quickly as they would have, by which time I've remotely activated various anti-theft Mac apps.)

    So, like the related question asks: why isn't it behaving like it should? The Windows 7 FAQ states:

        Will sleep eventually drain my laptop battery? If your laptop battery charge gets
        critically low while the computer is asleep, Windows automatically puts the laptop
        into hibernation mode.

    But this is just not happening on my rMBP under Windows 8. It seems that every time I set the laptop to sleep (when it reaches 10%) and then arrive home, plug it in and hope to simply resume my work, it does NOT save the session to disk, and I lose ALL my work. Whose fault is it? Windows 8's (a bug, grr)? Or Apple's EFI implementation (maybe fixable by editing EFI options - do I perhaps have to install rEFIt to make it work)? Or can changing Windows power options somehow fix the problem? Thanks for your help.
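    A minimal sketch of what to check from an elevated command prompt - first that hibernation is enabled at all (Boot Camp installs sometimes ship with it off), then that the on-battery critical action is actually set to Hibernate:

        powercfg /hibernate on
        powercfg /setdcvalueindex scheme_current sub_battery batactioncrit 2
        powercfg /setactive scheme_current

    Here 2 is the index for Hibernate in the critical-battery-action list, and scheme_current / sub_battery are powercfg's built-in GUID aliases. Verify afterwards with powercfg /query scheme_current sub_battery.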


  • Add Route for machine in same DC

    - by gary
    My routing table on my machine, which has the IP 46.84.121.243, currently looks like this:

        Network Destination    Netmask            Gateway        Interface      Metric
        0.0.0.0                0.0.0.0            46.84.121.225  46.84.121.243  21
        46.84.121.224          255.255.255.224    On-link        46.84.121.243  276
        46.84.121.239          255.255.255.255    On-link        46.84.121.243  21
        46.84.121.243          255.255.255.255    On-link        46.84.121.243  276
        46.84.121.255          255.255.255.255    On-link        46.84.121.243  276

    I'm trying to access 46.84.121.239, which is my other machine in the same DC, but my guess is the first rule is blocking it, as it tries to go via the gateway and fails:

        Tracing route to [46.84.121.239] over a maximum of 30 hops:
          1  OWNEROR-9O83HBL [46.84.121.243]  reports: Destination host unreachable.
        Trace complete.

    I'm doing all this via RDP, and I already tried changing the metric on the persistent rule - with devastating consequences! Here's the persistent route (working):

        Persistent Routes:
        Network Address  Netmask  Gateway Address  Metric
        0.0.0.0          0.0.0.0  46.84.121.225    1

    Any help in getting access to 46.84.121.239 would be very much appreciated, thanks.


  • Variable directory names over SCP

    - by nedm
    We have a backup routine that previously ran from one disk to another on the same server, but we have recently moved the source data to a remote server and are trying to replicate the job via scp. We need to run the script on the target server, and we've set up key-based scp (no username/password required) between the two servers. Using scp to copy specific files and directories works perfectly:

        scp -r -p -B [email protected]:/mnt/disk1/bsource/filename.txt /mnt/disk2/btarget/

    However, our previous routine iterates through directories on the source disk to determine which files to copy, then runs them individually through gpg encryption. Is there any way to do this using only scp? Again, this script needs to run from the target server, and the user the job runs under only has scp (no ssh) access to the source system. The old job looked something like this:

        #Change to source dir
        cd /mnt/disk1

        #Create variable to store
        # directories named by date YYYYMMDD
        j="20000101/"

        #Iterate through directories in the current dir
        # to get the most recent folder name
        for i in $(ls -d */); do
          if [ "$j" \< "$i" ]; then
            j=${i%/*}
          fi
        done

        #Encrypt individual files from $j to target directory
        cd ./${j%%}/bsource/
        for k in $(ls -p | grep -v /$); do
          sudo /usr/bin/gpg -e -r "Backup Key" --batch --no-tty -o "/mnt/disk2/btarget/$k.gpg" "/mnt/disk1/$j/bsource/$k"
        done

    Can anyone suggest how to do this via scp from the target system? Thanks in advance.
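    Strictly speaking, scp alone can't list remote directories, but the same key usually also works for sftp, which can. A minimal sketch of doing the discovery step from the target (assumes the source side permits the sftp subsystem; if the account is locked down to scp-only with something like scponly or rssh, this won't fly):

        # newest YYYYMMDD-named directory on the source
        j=$(printf 'cd /mnt/disk1\nls -1\n' \
            | sftp -b - [email protected] 2>/dev/null \
            | grep -E '^[0-9]{8}' | sort | tail -1)

        # pull that day's files over, then gpg-encrypt locally as before
        scp -r -p -B "[email protected]:/mnt/disk1/$j/bsource" /mnt/disk2/btarget/staging/

    The grep filters out sftp's echoed prompt lines, keeping only the date-named entries.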


  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

    - Virtualization: VMware Workstation 6.5 (latest)
    - Host: Windows Server 2008 x64
    - Guest: Windows Server 2008 x86
    - Host network adapter: wireless
    - Guest network adapter 1: over Bridge VMNet (automatic)
    - Guest network adapter 2: over Host-only VMNet

    Problem: when I surf the net within the VM, my internet connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely, but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. This works fine when the host is connected via Ethernet. But when connected via Wifi, the guest's connection behaves as described above. It connects fine, it gets a valid IP from DHCP... everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2MB file). In that case it starts downloading and stops after a while. Speed just drops to 0 B/s... Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works otherwise - I can ping around with no problem.


  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing the write-protected flag in the information dialog works just fine. However, I have many more such files, so I want to do it via Terminal. I already tried chmod +w, but that didn't work; ls -la showed me the permissions are already fine ("-rwxrwxrwx 1 az az ...", where az is my user account). Then I thought this might be stored in some xattr property, but xattr -l didn't give me any entries. Then I thought it might be an ACL setting (whereby I thought those would be stored as xattrs, but let's try it anyway), and some Google searching returned something involving chmod -a or chmod -i. All these attempts only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely don't have write access to the file, because I cannot move it or make any other change to it in Terminal. Removing the write-protect flag in Finder solves that.
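    Finder's "Locked" checkbox doesn't map to permissions, xattrs or ACLs but to a BSD file flag - on a FAT32 volume the DOS read-only attribute is surfaced as the user-immutable flag (uchg), which chmod never touches. A minimal sketch (the volume path is an assumption):

        # locked files show "uchg" in the flags column
        ls -lO /Volumes/USBDISK/somefile

        # clear it for one file, or for a whole tree
        chflags nouchg /Volumes/USBDISK/somefile
        chflags -R nouchg /Volumes/USBDISK/somedir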


  • How to create VirtualHost in Ubuntu 12.10

    - by Mifas
    I have followed many articles on how to create a VirtualHost in Ubuntu. This is what I have done:

    1. Installed Apache: sudo apt-get install lamp-server^ phpmyadmin
    2. Created a folder called site1.com in /var/www/
    3. Created the file /etc/apache2/sites-available/site1.com with the following contents:

        <VirtualHost *:80>
            ServerName www.site1.com
            ServerAdmin [email protected]
            ServerAlias site1.com
            DocumentRoot /var/www/site1.com
            # Other directives here
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site1.com/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    4. Added the following line to the hosts file: 127.0.0.1 site1.com
    5. Enabled the site via sudo a2ensite site1.com, then restarted the Apache service (I even restarted the PC).

    When I go to site1.com, I get a "The connection has timed out" error message, but I can browse via localhost/site1.com. I have been trying for the last two days with no solution, and have followed many articles and videos.
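    A timeout (rather than a "not found" page) usually means the browser never reached Apache at all - typically because site1.com is resolving somewhere other than 127.0.0.1 for the machine making the request, or a proxy is in the way. A minimal sketch of narrowing it down, assuming the browser runs on the server itself:

        # what does site1.com actually resolve to here?
        getent hosts site1.com

        # is Apache listening on 80, and does it know the vhost?
        sudo netstat -tlnp | grep ':80'
        sudo apache2ctl -S

        # bypass name resolution entirely
        curl -v -H 'Host: site1.com' http://127.0.0.1/

    If the curl works, the vhost itself is fine and the problem is purely name resolution - for example, the hosts entry edited on the wrong machine when browsing from another PC.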


  • Accessing SSH_AUTH_SOCK from another non-root user

    - by Danny F
    The scenario: I am running ssh-agent on my local PC, and all my servers/clients are set up to forward SSH agent auth. I can hop between all my machines using the ssh-agent on my local PC. That works. I need to be able to SSH to a machine as myself (user1), change to another user named user2 (sudo -i -u user2), and then ssh to another box using the ssh-agent I have running on my local PC. Let's say I want to do something like ssh user3@machine2 (assuming user3 has my public SSH key in their authorized_keys file). I have sudo configured to keep the SSH_AUTH_SOCK environment variable. All users involved (user[1-3]) are non-privileged users (not root).

    The problem: when I change to another user, even though the SSH_AUTH_SOCK variable is set correctly (let's say to /tmp/ssh-HbKVFL7799/agent.13799), user2 does not have access to the socket that was created by user1 - which of course makes sense, otherwise user2 could hijack user1's private key and hop around as that user. This scenario works just fine if, instead of getting a shell via sudo for user2, I get a shell via sudo for root, because naturally root has access to all the files on the machine.

    The question: preferably using sudo, how can I change from user1 to user2 but still have access to user1's SSH_AUTH_SOCK?
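    One approach that keeps sudo in the picture: as user1, grant user2 access to the agent socket with POSIX ACLs before switching. A minimal sketch, assuming the filesystem holding /tmp supports ACLs:

        # run as user1, before the sudo
        setfacl -m u:user2:x "$(dirname "$SSH_AUTH_SOCK")"   # traverse the agent dir
        setfacl -m u:user2:rw "$SSH_AUTH_SOCK"               # read/write the socket

        sudo -i -u user2        # SSH_AUTH_SOCK kept via env_keep
        ssh user3@machine2

    This deliberately opens exactly the key-hijack hole described above, for one chosen user - so it's worth clearing the ACL again (setfacl -x u:user2 ...) when done.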


  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate user/pass authentication. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf contains basic stuff; everything works up until I try to implement this form of authentication:

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution, and all the answers seem to relate to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication; a failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, auth.pl returns exit 0 no matter what the input is, but it seems the file is never executed before the error is raised. auth.pl contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
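    That error is printed by the client, not the server: it appears when the server demands credentials but the client config has no auth-user-pass directive (or there is no terminal to prompt on). A minimal sketch of the client side (the remote name and file paths are assumptions):

        # client.conf
        client
        dev tun
        proto tcp
        remote vpn.example.com 1194
        ca ca.crt
        auth-user-pass              # prompt for username/password...
        # auth-user-pass auth.txt   # ...or read them from a 2-line file

    With that in place, auth.pl on the server should finally be invoked with the username and password in its environment.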


  • Get Squid to pass X-Requested-With header

    - by tftd
    I have configured a Squid 3.1 proxy server. Everything works great except for the X-Requested-With header: I can't figure out how to pass that header through to the site I'm opening via the proxy. This is my current configuration:

        request_header_access Allow allow all
        request_header_access Authorization allow all
        request_header_access WWW-Authenticate allow all
        request_header_access Proxy-Authorization allow all
        request_header_access Proxy-Authenticate allow all
        request_header_access Cache-Control allow all
        request_header_access Content-Encoding allow all
        request_header_access Content-Length allow all
        request_header_access Content-Type allow all
        request_header_access Date allow all
        request_header_access Expires allow all
        request_header_access Host allow all
        request_header_access If-Modified-Since allow all
        request_header_access Last-Modified allow all
        request_header_access Location allow all
        request_header_access Pragma allow all
        request_header_access Accept allow all
        request_header_access Accept-Charset allow all
        request_header_access Accept-Encoding allow all
        request_header_access Accept-Language allow all
        request_header_access Content-Language allow all
        request_header_access Cookie allow all
        request_header_access Mime-Version allow all
        request_header_access Retry-After allow all
        request_header_access Title allow all
        request_header_access Connection allow all
        request_header_access User-Agent allow all
        request_header_access All deny all    #remove all other headers

        # delete "x-forwarded-for" headers
        forwarded_for delete
        request_header_access Via deny all
        request_header_access X-Forwarded-For deny all

    I tried to add the line request_header_access X-Requested-With allow all to the configuration, but apparently X-Requested-With is an unknown header name... Am I missing something?
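    Squid only recognizes registered header names in request_header_access rules, which is why X-Requested-With is rejected as unknown. Everything unregistered is matched collectively by the special name "Other" - so a minimal sketch of the likely fix (assuming Squid 3.1's header-ACL behaviour) is to allow Other before the final deny:

        # pass unregistered headers such as X-Requested-With
        request_header_access Other allow all
        request_header_access All deny all

    The trade-off is that this passes every nonstandard header, not just X-Requested-With; Squid's header filtering can't single out individual unregistered names.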

