Search Results

Search found 30087 results on 1204 pages for 'default package'.

Page 755/1204

  • Two VPN (internet) connections routing (win2003)

    - by tmp3128
    Here is my setup: a win2003 server (ISA installed) with 3 NICs: 1) internal network, 2) ISP 1 (default) network (DHCP enabled), 3) ISP 2 (backup) network (DHCP enabled). There are several "normal" PCs within the internal net and one "special" PC within the internal net. Both ISP 1 and ISP 2 provide access to the internet and to their own resources through their VPN connections. The goal is to have all "normal" PCs use the internet through ISP 1's VPN connection, while the "special" PC should use only ISP 2's VPN connection. Furthermore, all "normal" and "special" PCs should have access to several servers accessible only through ISP 2's VPN connection. I have some thoughts on how to achieve this, but I want to be certain, because everything should be configured as quickly as possible, avoiding significant downtime.
    windows-server-2003 isa routing vpn
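    A hedged partial sketch for the last requirement: the servers reachable only over ISP 2's VPN can be pinned with persistent static routes on the win2003 box (all addresses below are placeholders); steering the "special" PC's default traffic out through ISP 2 is ISA firewall/routing-rule territory rather than something plain static routes can express.

        rem On the win2003 router: send traffic for the ISP_2-only servers
        rem through ISP_2's VPN gateway, regardless of the default route.
        route add 203.0.113.0 mask 255.255.255.0 10.2.0.1 -p

        rem Verify which gateway a given destination will actually use:
        route print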

    Read the article

  • Using 64-bit wuauclt from a 32-bit command prompt

    - by Tim Brigham
    I have a script that for legacy reasons needs to run inside a 32-bit command shell. This script also includes references to certain core Windows binaries, most notably wuauclt but others as well, which are not accessible by default within the 32-bit environment. This script is being run in several locations, including many Windows 7 and Server 2008 R2 boxes. I'm aware of the possibility of copying files from System32 to SysWOW64 to get around this. Is there any better method, something along the lines of adding an entry to the path variable, which would allow me to fall back to these 64-bit binaries from within a 32-bit script?
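    A minimal batch sketch of one approach, using the WOW64 "Sysnative" alias available on Windows 7 and Server 2008 R2: from a 32-bit process, %windir%\Sysnative bypasses file-system redirection and reaches the real 64-bit System32, so nothing has to be copied.

        @echo off
        rem In a 32-bit cmd.exe on 64-bit Windows, System32 is silently
        rem redirected to SysWOW64; Sysnative undoes that redirection.
        if exist "%windir%\Sysnative\wuauclt.exe" (
            "%windir%\Sysnative\wuauclt.exe" /detectnow
        ) else (
            rem Native 64-bit shell, or a 32-bit OS: System32 is the real thing.
            "%windir%\System32\wuauclt.exe" /detectnow
        )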

    Read the article

  • Webcam microphone input in GNOME/PulseAudio

    - by sdaau
    I just got a "Trust" webcam, which gets recognized on my Ubuntu Lucid. It has a built-in microphone, which also gets recognized; however, I cannot get it to act as the system microphone input. Here are some screenshots of what is shown by gnome-volume-control: the default window shows the Trust webcam, which has two profiles, "Analog Mono Input" and "Off"; of course, I have it on "Analog Mono Input". However, on the "Input" tab there is no matching "device for sound input", nor a matching connector. Then I installed pavucontrol, but that doesn't show much more; it tells me first that gnome-volume-control reads from "Internal Audio Analog Stereo". In the "Input devices" tab, there is again nothing resembling the mic input from the webcam. Finally, under the "Configuration" tab, the "Trust" webcam shows, but even when its profile is on "Analog Mono Input", nothing much happens. So, does anyone know how I could get this webcam microphone to be recognized as the system input? Many thanks in advance for any answers, Cheers!
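    A hedged command-line sketch with PulseAudio's pactl; the source name below is hypothetical, so list yours first and substitute it.

        # List every capture device PulseAudio currently knows about.
        pactl list sources short

        # If the webcam mic appears (e.g. alsa_input.usb-Trust_Webcam.analog-mono,
        # a made-up name), make it the default capture device:
        pactl set-default-source alsa_input.usb-Trust_Webcam.analog-mono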

    Read the article

  • Apache does not serve non-locally

    - by yodaj007
    I have a freshly installed instance of Fedora 16 inside VirtualBox, using bridged networking. On it, as root, I typed:

        yum -y install httpd
        service httpd start
        ifconfig

    Inside the VM, I can open a web browser to 'localhost' and I get the Apache test page. It works. But in Windows (the machine hosting the VM), I point my browser to the IP address returned by ifconfig (192.168.2.122) and the connection times out. I can go to a command prompt and ping the VM. Is there a firewall or something that comes with Fedora by default? Or is there something I need to change in a config file?
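    A minimal sketch of the usual culprit: a stock Fedora install ships with an iptables firewall that blocks inbound port 80 (commands assume the stock iptables service on this release).

        # See what the firewall currently allows.
        iptables -L -n

        # Open TCP port 80 for this boot...
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        # ...and persist it where the iptables service reads rules at boot:
        iptables-save > /etc/sysconfig/iptables

        # Or stop the firewall entirely, just to confirm the diagnosis:
        systemctl stop iptables.service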

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713, and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course, that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility.

    My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers. You either have to run VMWare or Hyper-V on your developer machine or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements. Better workflow and web part development have also eased the burden on many developers.

    The other thing the show brought up in my thoughts was more around usage. Users want to be able to self-serve everything, without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want. Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun. You can do it, but that doesn't make it the smart thing to do.

    As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place, then, as they said in the episode of DNR, document libraries (and I believe SharePoint in general) start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization.

    These are just a couple of the thoughts that were triggered by the show. I'm sure there are more discussions that can be had. Feel free to leave your comments about the pros and cons of SharePoint. del.icio.us Tags: .NET Rocks,SharePoint,software development

    Read the article

  • Best way to host multiple WordPress sites on a single VPS [migrated]

    - by Ben
    Not sure if this is a webmaster or a WordPress question; it's a bit half and half, sorry if I'm posting in the wrong place. Without using Multi-Site or installing new WordPress CMSes on second-level domains, what's the best way to get multiple WordPress installs running on my VPS (running Linux-powered CentOS 6 with WHM and cPanel)? It's currently working, but only with the permalinks option set to the default setting, so the URLs aren't human-friendly. I have come across something called WPSiteStack, though I'd really rather not go down this route. Long story short, I need the following (see the sketch after this list):

    - Separate installs, so one core / theme / plugin update doesn't affect all sites, and the security of all sites is increased;
    - 'Pretty' permalinks;
    - Each WordPress install must be in the root of its own domain, to ensure that I can accurately measure my clients' quotas.

    It may also be worth noting that some functions within each install use the $_SERVER['DOCUMENT_ROOT'] and $_SERVER['HOST'] variables. I have already edited the httpd-vhosts.conf, httpd.conf and .htaccess files, but this hasn't made any changes. So any ideas what I'm missing or doing wrong? Any help is much appreciated.
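    A hedged sketch of a per-site Apache vhost (domain and paths are hypothetical); when permalinks only work on the default setting, the usual cause is that mod_rewrite/.htaccess overrides aren't permitted for the site's docroot.

        <VirtualHost *:80>
            ServerName example-client.com
            DocumentRoot /home/exampleclient/public_html

            # WordPress pretty permalinks need mod_rewrite plus permission
            # to apply the .htaccess that WordPress writes into its docroot.
            <Directory /home/exampleclient/public_html>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny      # Apache 2.2 syntax, as shipped with CentOS 6
                Allow from all
            </Directory>
        </VirtualHost>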

    Read the article

  • DDNS not creating journal (dhcpd and named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging, I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages, I think I'll be good to go ^^^

    * EDIT 2 * Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and despite what I have seen before... I don't know why, but that did the trick. I also re-checked perms on the dir the files live in; as certain as I was, they were correct, with named having rw.

    CentOS 6 (final), dhcpd 4.1.1-P1, named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6. Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. The chroot is /var/named/chroot. I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic); no matter which dir, with named owning it and wide-open perms, I now get nowhere. Along the way I, at one point, got a permission denied when named tried to create the journal. I resolved that issue with:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame the permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... that didn't matter, since this is a new setup, so I deleted the journal, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or the chrooted /var/log/named.debug. I increased the debug level with 'rndc trace'; no love. Increased trace to 10, still nothing. SELinux is disabled...

        [root@server temp]# sestatus
        SELinux status: disabled

    dhcpd.conf...

        allow client-updates;
        ddns-update-style interim;
        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf...

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };
        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows the /var/log/named.log output of named starting up; no errors.
        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start...

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot here and, if so, how? Many thanks.
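    A hedged sketch of the permission layout a chrooted setup along these lines typically needs for dynamic zones (paths assume the stock CentOS 6 bind-chroot package; adjust to the actual layout):

        # Keep dynamic zones (and their .jnl journals) in a directory that
        # named owns and can write, rather than opening up all of /var/named.
        chown root:named /var/named/chroot/var/named
        chmod 750 /var/named/chroot/var/named
        chown named:named /var/named/chroot/var/named/dynamic
        chmod 770 /var/named/chroot/var/named/dynamic

        # After fixing ownership, a freeze/thaw cycle makes named rewrite
        # the zone and recreate its journal:
        rndc freeze example.lan
        rndc thaw example.lan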

    Read the article

  • Booting Ubuntu 13.10 from USB on a server with no OS (Dell PowerEdge T110)

    - by user35581
    I have a Dell PowerEdge T110 (Xeon 1220v2) server with no OS. From my Mac, I saved the Ubuntu 13.10 server ISO (x86) and used UNetbootin to write the ISO to a USB stick (2GB). I was hoping to boot the server from USB, and all seemed to be going well; the BIOS even detected my USB drive when I plugged it in, but for some reason I'm getting a "Missing operating system" error. I checked the USB drive, and it appears that UNetbootin put the correct files on it, from a cursory glance (although I'm not entirely sure what I should be looking for). Should I be able to boot a server with no OS from a UNetbootin-created Ubuntu 13.10 USB? And if so, why might the BIOS not find the right files? I had read that there is a USB Emulation setting in the BIOS, but I haven't been able to find it in the menus. My understanding is that it is set to on by default. I might try wiping the USB stick and running UNetbootin again. The USB is formatted as MS-DOS FAT16 (should it be formatted some other way?).
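    As a hedged alternative from the Mac: Ubuntu server ISOs are hybrid images that can be written raw to the stick with dd, which sidesteps UNetbootin entirely. The device node and ISO filename below are assumptions; double-check with diskutil list, since writing to the wrong disk is destructive.

        # Identify the USB stick, e.g. /dev/disk2 (hypothetical).
        diskutil list

        # Unmount (not eject) the stick, then write the ISO raw.
        diskutil unmountDisk /dev/disk2
        sudo dd if=ubuntu-13.10-server-amd64.iso of=/dev/rdisk2 bs=1m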

    Read the article

  • ADF Faces Skin Editor - How to Work with It

    - by Shay Shmeltzer
    The ODTUG Kscope11 conference was a great success, with lots of sessions about FMW running in a special track. I did several sessions and labs at the conference, and I thought it might be a good idea to at least give you a taste of what you might have missed. So here is most of what I demoed in my ADF Faces Skinning session (not all, though; that session was 60 minutes long, and while everyone did end up going out of the building in the middle, for about 5 minutes, because of a fire drill, there were other things covered in the session as well). In the demo here you'll see how to generate new images and a default color scheme, how to identify a component class with Firebug, how to skin a component, how to identify the global selector of a property, how to change fonts, and how to change strings. By the way, for more on ADF skinning you should also listen to the ADF Insider seminar that Frank Nimphius recorded on skinning; it will give you a better understanding of the overall skinning process. P.S. In the demo I add an entry to the web.xml file which prevents ADF Faces from compressing the HTML that is generated. The entry is org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and I set it to true. This is very useful while you work on creating the skin, but don't forget to un-set it before you go to production.
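    For reference, a sketch of what that web.xml entry looks like as a standard servlet context parameter (the parameter name is taken from the post itself):

        <context-param>
            <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name>
            <param-value>true</param-value>
        </context-param>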

    Read the article

  • Server 2003 RAS Server Utilising High WAN Traffic

    - by Joe Sergeant
    We have Routing and Remote Access configured on Server 2003 (also our primary domain controller), allowing users to connect remotely to access files, email, etc. With one user, the RAS server is constantly sending data to that user's remote computer; since 9am this morning it has transferred almost 800MB. The user isn't transferring any files remotely, certainly not enough to total 800MB anyway. None of the other remote users have had this issue. We have ensured that the user in question has "Use default gateway on remote network" disabled for both IPv4 and IPv6, and we are fairly confident that Offline Files isn't trying to synchronise with the server remotely either. My question is two-fold. Firstly, has anyone had a similar experience? Secondly, what would be the best software to discover exactly what data is being sent to the remote user?

    Read the article

  • SDL mouse wheel events not picked up

    - by Chris
    Running Ubuntu 11.04 with SDL 1.2, I'm trying to pick up mouse wheel up/down movement with this (stripped down) code:

        int main( int argc, char **argv )
        {
            SDL_MouseButtonEvent *mousebutton = NULL;
            while ( !done ) {
                if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_LEFT)
                    yrot += 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_RIGHT)
                    yrot -= 0.75f;
                else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELUP) {
                    xrot += 0.75f;
                } else if (mousebutton != NULL && mousebutton->button == SDL_BUTTON_WHEELDOWN) {
                    xrot -= 0.75f;
                }
                while ( SDL_PollEvent( &event ) ) {
                    switch( event.type ) {
                        case SDL_MOUSEBUTTONDOWN:
                            mousebutton = &event.button;
                            break;
                        case SDL_MOUSEBUTTONUP:
                            mousebutton = NULL;
                            break;
                        default:
                            break;
                    }
                }
            }
            return 0;
        }

    The strange thing is, scrolling with the mouse wheel does nothing, but if I hold down a mouse button or two and then move the mouse, it hits the SDL_BUTTON_WHEEL code occasionally. This honestly reeks of a pointer issue, which would make sense since I've been spoiled with C# for the past couple of years, but I am just not seeing it. How do I correctly find mouse scroll events in SDL?
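    A hedged sketch of the usual SDL 1.2 pattern: wheel "clicks" arrive as an immediate button-down/button-up pair, so they have to be acted on inside the event loop rather than latched into a pointer that the next SDL_PollEvent overwrites. Held buttons are more reliably read as state each frame.

        /* Inside the event loop: act on wheel clicks as they arrive. */
        while (SDL_PollEvent(&event)) {
            switch (event.type) {
            case SDL_MOUSEBUTTONDOWN:
                if (event.button.button == SDL_BUTTON_WHEELUP)
                    xrot += 0.75f;
                else if (event.button.button == SDL_BUTTON_WHEELDOWN)
                    xrot -= 0.75f;
                break;
            default:
                break;
            }
        }

        /* Once per frame: poll held buttons instead of keeping a stale pointer. */
        Uint8 buttons = SDL_GetMouseState(NULL, NULL);
        if (buttons & SDL_BUTTON(SDL_BUTTON_LEFT))  yrot += 0.75f;
        if (buttons & SDL_BUTTON(SDL_BUTTON_RIGHT)) yrot -= 0.75f;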

    Read the article

  • How to switch to a generic kernel in a headless Ubuntu Server 12.04?

    - by chmike
    I just got a dedicated server with Ubuntu 12.04 installed with a custom-compiled kernel. Since I would like to install VirtualBox, and this custom kernel doesn't support dynamic module loading (for security), I need to change the kernel. I've been running Ubuntu servers for years but have never played with grub on a headless computer. When the command update-grub is run, it shows the different kernels it finds. Here is what I see:

        Generating grub.cfg ...
        Found linux image: /boot/bzImage-3.2.13-xxxx-grs-ipv6-64
        Found linux image: /boot/vmlinuz-3.2.0-34-generic
        Found initrd image: /boot/initrd.img-3.2.0-34-generic
        No volume groups found
        done

    The first one is the active one, as seen with uname -r. To me it looks like the second kernel is the one I should use, but I don't know how to configure grub2 to use it. The computer is also configured with a software RAID, using mdadm I guess. I've never used that before, and I don't know if playing with grub or changing the kernel could break it. What must I do to set the generic kernel as the default one, so that I can get VirtualBox running?
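    A hedged sketch for grub2 on 12.04 (the menu entry title below is an assumption; copy the exact string from grub.cfg):

        # See the exact menu entry titles grub generated.
        grep menuentry /boot/grub/grub.cfg

        # In /etc/default/grub, point GRUB_DEFAULT at the generic entry, e.g.:
        #   GRUB_DEFAULT="Ubuntu, with Linux 3.2.0-34-generic"
        # (the default of 0 means "first entry"; a quoted title pins a specific one)

        # Regenerate grub.cfg and reboot onto the new default.
        update-grub
        reboot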

    Read the article

  • What differences are there between an official Ubuntu AMI image and a base install from an ISO?

    - by David Winter
    When creating a new instance on AWS using an official Ubuntu 12.04 server AMI, what differences are there compared to a standard server install from an ISO on a computer of my own? For example: the default user is 'ubuntu'; an SSH public key is added to that user's authorized_keys file; sudo is passwordless for that user; PasswordAuthentication is disabled for SSH; etc. Configurations have been changed from their defaults, and I'd like to know if there is a list, or somewhere I could find out what modifications were made.

    Read the article

  • Outlook 2010: using signatures stored on network

    - by Gregory MOUSSAT
    With Outlook before the 2010 version, it was possible to specify any path for the signatures. With Outlook 2010, the only way is to use those stored in C:\Documents and Settings\UserName\Application Data\Microsoft\Signatures\. I'd like to point the signatures at a network share, allowing us to modify the signatures in the share instead of logging in on every computer each time we are asked to modify them (and this is quite often, because the signatures contain logos for current events). We currently use a script to copy the signatures from the share to the local disk when users log in. Question: how do I set Outlook 2010 to use signatures outside of the default signature folder?
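    A hedged workaround sketch: leave Outlook pointing at its default folder, but make that folder a directory symlink to the share (mklink needs Vista or later and an elevated prompt; server and share names are examples).

        rem Seed the share once, then replace the local folder with a link to it.
        robocopy "%APPDATA%\Microsoft\Signatures" "\\fileserver\signatures" /E
        rmdir /S /Q "%APPDATA%\Microsoft\Signatures"
        mklink /D "%APPDATA%\Microsoft\Signatures" "\\fileserver\signatures"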

    Read the article

  • How would I batch rename a lot of files using command-line?

    - by Whisperity
    I have a problem which I am unable to solve: I need to rename a great dump of files using patterns. I tried using this, but I always get an error. I have a folder with a lot of files inside; running ls -1 | wc -l returns that I have about 160000 files. The problem is that I wish to move these files to a Windows system, but most of them have characters like : and ? in their names, which makes the files inaccessible on said Windows-based systems. (As a "do not solve but deal with" method, I tried booting up a LiveCD on the Windows system and moving the files using the live OS. Under that Ubuntu, the files were readable and writable on the mounted NTFS partition, but when I booted back into Windows, it showed that the file was there but Windows was unable to access it in any fashion: rename, delete or open.) I tried running rename 's/\:/_' * inside the folder, but I got an "Argument list too long" error. Some searching revealed that it happens because I have so many files, and then I arrived here. The problem is that I don't know how to alter the command to suit my needs, as I always end up with various errors. Trying find -name '*:*' | xargs rename : _ gives:

        xargs: unmatched single quote; by default quotes are special to xargs unless you use the -0 option
        syntax error at (eval 1) line 1, near ":"
        xargs: rename: exited with status 255; aborting

    Adding -0 after xargs turns the error message into "xargs: argument line too long". These files are archive files generated by various PHP scripts. The best solution would be a chance to rename them before they are moved to Windows, but if there is no way to do that, we might have a way to rename the files while they are moved to Windows. I use samba and proftpd to move the files. Unfortunately, graphical software is out of the question, as the server containing the files is what it is, a server, with only a command-line interface.
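    A hedged sketch that sidesteps the argument-list limit by streaming NUL-delimited names from find into Perl's rename (note the closing delimiter and /g that the original expression was missing):

        # Replace every ':' and '?' with '_', in batches sized by xargs.
        find . -maxdepth 1 -type f -name '*[:?]*' -print0 \
            | xargs -0 rename 's/[:?]/_/g'

        # Equivalent without xargs, letting find batch the arguments:
        find . -maxdepth 1 -type f -name '*[:?]*' -exec rename 's/[:?]/_/g' {} +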

    Read the article

  • Is giving read permissions on /etc/shadow to the apache user a wise decision from a security point of view?

    - by Czar
    I have to use PAM authentication for DAV SVN, but when everything is configured as specified in the mod_auth_pam documentation, authentication does not work. After some research I realized that, for this to work, httpd should either run under the root user (which I don't like and won't implement) or the apache user (under which httpd runs by default) should have permission to read the /etc/shadow file. So there is a pair of connected questions I want to ask:

    1. Is giving this permission to the apache user a wise decision from a security point of view?
    2. If the answer to the first question is "yes", what is the correct way to do so?

    For now I've done the following:

        groupadd shadow
        usermod -G shadow apache
        chmod g+r /etc/shadow

    Another way I can come up with is using an acl:

        setfacl -m u:apache:r /etc/shadow

    Note: the OS is Fedora 14 x86_64 (kernel 2.6.35.11), httpd v2.2.17, mod_auth_pam v1.1.1.

    Read the article

  • Windows BIOS fails to boot up, keeps resetting

    - by Dwayne Diamond
    When I turn the computer on, this error always happens: just after it starts up, a screen pops up saying F1 to enter BIOS setup or F2 to start Windows normally. If I press F2 it goes into Windows, but it freezes, and then I have to turn the PC off. The problem is that the BIOS settings, plus the date and time, keep resetting themselves somehow, and it's not the battery, because I put a new one in. So when I manage to load the default settings and get it working after some struggle, then shut down the PC, the next day I switch the PC on and it does the whole process again... What could this problem be, and how can I fix it? Thank you.

    Read the article

  • Resolution changes when using a switch

    - by Edward D
    So, the "real" resolution of my monitor is 1024x768. That's what I'd use on my docked Windows laptop, and what I'd use on my Xubuntu desktop connected directly. When I connect a switch, to switch between the two, however, the ubuntu machine's resolution changes. Everything's still proportional, and it still thinks it's doing 1024x768, but the icons and fonts appear larger. Not 800x600 larger, but still big. When I used Xubuntu Precise, I created an xorg.conf file to set a resolution of 1280x1024 which made it look the way it does without the switch ... as a workaround. When I upgraded to Trusty, I lost this. I tried to re-create it, but doesn't seem to load my file. Ideally, I'd like to correct the original problem, but I'd settle for being able to up the resolution. I searched for a while, and tried to do it, but I'm giving up ... please help me out. Controller: Intel Corporation 82915G/P/GV/GL/PL/910GL Memory Controller Hub (rev 04) Monitor: NEC MultiSync LCD 1850e http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf OS: Xubuntu 14.04 Trusty /etc/X11/xorg.conf: # YOU CREATED THIS FILE # sudo leafpad /etc/X11/xorg.conf Section "Device" Identifier "Configured Video Device" EndSection Section "Monitor" Identifier "NEC LCD1850E" # I found Synchronization Range at: # http://www.necdisplay.com/documents/UserManuals/LCD1850E_manual.pdf HorizSync 31.0-82.0 VertRefresh 55.0-85.0 EndSection Section "Screen" Identifier "Default Screen" Monitor "NEC LCD1850E" Device "Configured Video Device" SubSection "Display" Depth 24 Modes "1280x1024" "1280x960" "1024x768" "800x600" "848x480" "640x480" EndSubSection EndSection

    Read the article

  • Setting up phpMemCacheAdmin on CentOS 5.5

    - by Bill Smith
    I have been able to set up phpMemCacheAdmin (http://code.google.com/p/phpmemcacheadmin/) on CentOS and can view the localhost memcache statistics; however, whenever I add other memcached nodes, the config never changes. I am fairly certain it has something to do with permissions, but I am unable to track down what exactly needs to be done, or how to do it. The install was pretty straightforward:

        wget http://phpmemcacheadmin.googlecode.com/files/phpMemCacheAdmin-1.1.3r161.tar.gz
        tar xvzf phpMemCacheAdmin-1.1.3r161.tar.gz
        chmod +w Config/Memcache.ini

    But the docs also state that Apache needs rw rights on the temp file folder (default: Temp/) and the entire config directory (Config/); that is the part I am unsure of. Help!
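    A hedged sketch of granting those rights on CentOS, where httpd runs as the apache user (the install path is hypothetical):

        cd /var/www/html/phpMemCacheAdmin    # wherever the tarball was unpacked

        # Let the apache user own and write the config and temp trees.
        chown -R apache:apache Config/ Temp/
        chmod -R u+rwX Config/ Temp/

        # If SELinux is enforcing, the trees also need a writable web context:
        chcon -R -t httpd_sys_rw_content_t Config/ Temp/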

    Read the article

  • Windows Azure openSUSE: rcnetwork not starting

    - by djechelon
    I have a Linux VM in Azure, created from their default image. My problem is simply that the init script network doesn't seem to start, so dependent services (apache, postfix...) won't start either. If I run yast runlevel and try to start postfix, it asks me to start network first; if I accept, network is started without errors and then postfix is started. While network is configured to start on boot, it just doesn't appear to have started. Anyway, SSH connections work fine. Currently, I have had to edit my init scripts and remove network from the Required-Start list, but that didn't work for postfix (even after running systemctl --system daemon-reload). How can I fix all this?
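    A hedged sketch of checks that may pin down where the boot-time start fails (service and tool names follow the stock openSUSE layout; journalctl may or may not exist on this image):

        # Is the legacy init script enabled for the default runlevel?
        chkconfig network --list
        chkconfig network on

        # What does systemd think of it, and what got logged at boot?
        systemctl status network.service
        grep -i network /var/log/messages

        # A verbose manual start often surfaces the real error:
        /etc/init.d/network start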

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh commands wouldn't need the keyring password any longer (not quite sure if this was on Ubuntu or another distro). But nowadays, doing ssh [email protected] asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth optional pam_gnome_keyring.so
        session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

        $ pgrep keyring
        1784

    so the keyring daemon (gnome-keyring-d) is alive. I finally found that the SSH_AUTH_SOCK variable (along with GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) is not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I can set them in .bashrc, but then the variables would only be defined in a bash session; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.
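    A hedged stopgap for a shell startup file: start the keyring's ssh component manually when no agent socket was inherited, and export the assignments it prints (gnome-keyring-daemon emits them on stdout for exactly this use).

        # In ~/.bashrc or ~/.profile: only act when the session gave us nothing.
        if [ -z "$SSH_AUTH_SOCK" ]; then
            eval "$(gnome-keyring-daemon --start --components=ssh,secrets)"
            export SSH_AUTH_SOCK GNOME_KEYRING_CONTROL GNOME_KEYRING_PID
        fi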

    Read the article

  • I noticed the answer to the Aug 2009 question of using VBA to set Custom Flag values

    - by Gary
    We are actively using the follow-up mail feature to assign users to particular mail follow-up items. Although mail item follow-ups are generally the province of the individual user, by changing the flag value from "Follow Up" to a user name, we can see each other's follow-up tasks and group them by responsible party. Currently we accomplish the flag value change manually. My sense is that the easiest way to perfect the process is a separate macro for each user. Along the lines of the macro to change the start/end due date to end of month, please define the macro to re-assign the flag value for each highlighted mail item (previously assigned by default). Thank you very much.
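    A hedged Outlook VBA sketch of one such per-user macro, writing the flag text (the MailItem.FlagRequest property) on every selected item; the user name string is a placeholder.

        ' Set the follow-up flag text on all currently selected mail items.
        Public Sub FlagSelectionForUser()
            Dim obj As Object
            Dim itm As Outlook.MailItem

            For Each obj In Application.ActiveExplorer.Selection
                If TypeOf obj Is Outlook.MailItem Then
                    Set itm = obj
                    itm.FlagRequest = "Gary"    ' substitute the responsible party
                    itm.Save
                End If
            Next obj
        End Sub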

    Read the article

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the target framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the service reference, and created a new reference to the service using the old "Add Web Reference". A matter of 2 minutes. When testing, the int value in the data contract arriving at the WCF service was suddenly 0, instead of 38 as we expected. What happened? When generating an old Web Reference on a WCF data contract, an additional boolean field is created for each value-type field, named [Fieldname]Specified (e.g. AgeSpecified), which defaults to "false". WCF inspects these boolean fields to determine whether a value was provided for the value-type field. If the "Specified" field is "false", WCF translates that to the default value of the value-type field; for int this is 0. So we had to set the "Specified" field for the int value to "true", and everything was fine again. That was what we forgot after setting the framework version to 2.0...
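    A hedged C# sketch of the pattern; the contract and member names are invented for illustration, but the [Fieldname]Specified convention is the one the post describes.

        // Hypothetical proxy type generated by "Add Web Reference" for a
        // contract holding { string Name; int Age; }.
        PersonData request = new PersonData();
        request.Name = "Smith";
        request.Age = 38;
        // Without this, the serializer omits Age on the wire and the
        // service materializes default(int), i.e. 0.
        request.AgeSpecified = true;
        client.Process(request);    // 'client' and 'Process' are placeholders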

    Read the article

  • How to create a separate network for customers only?

    - by Valter Henrique
    I work in a company where we have an Ethernet and wi-fi network, and we would like to create a network through which our customers could access the internet without having access to our computer network. This access would be internet only, nothing more; the customers must not see our computers or the files that we share on our network. I have two routers: a Cisco Linksys Wireless-N Broadband Router WRT160N V3 and a Netgear Wireless G Router WGR614 v9. How can I do this? As for firewalls, there are only the default Windows firewalls on each computer.

    Read the article

  • OS X Terminal ends session at opening: permission denied

    - by Jon
    I have an issue with Terminal on Mac OS X 10.7.4. I know where it comes from, but I don't know how to solve it. Yesterday, I installed fish-shell as a shell replacement. Following the installation instructions, I ended up typing the following command:

        chsh -s /usr/local/bin/fish

    I noticed I had to do a sudo bash for it to work. Once done, I quit. Today, I try to run Terminal and I see the following message:

        Last login: Wed Jun 27 12:38:01 on ttys000
        login: /usr/local/bin/fish: Permission denied
        [Opération terminée]

    ("Opération terminée" means "Process completed"; yes, I'm French, which explains my poor English. I cannot type any command, since I have no access to the Terminal. I tried with iTerm2, but I get the same issue. No command is set at launch in the default profile of Terminal/iTerm2 (well, in the UI). How can I take the power back? Thank you.
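    A hedged recovery sketch: since the login shell itself is what's broken, reset it through Directory Services from any shell you can still reach (Terminal's Shell > New Command... running /bin/bash, an SSH session, or single-user mode); the user name below is a placeholder.

        # Point the account back at bash without going through chsh:
        sudo dscl . -create /Users/jon UserShell /bin/bash

        # If fish should stay the login shell, make sure it is executable
        # and registered as an allowed shell:
        sudo chmod +x /usr/local/bin/fish
        grep -q /usr/local/bin/fish /etc/shells || \
            echo /usr/local/bin/fish | sudo tee -a /etc/shells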

    Read the article
