Search Results

Search found 15241 results on 610 pages for 'solaris operating environment'.

Page 508/610

  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an ssh login. It says that the user mail logged in using ssh from a remote address. Environment info:

        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found out that the password for mail is not set:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    and lots of auth failures for this user. There are no lines with a COMMAND string for this user. Nothing was found with rkhunter or with "ps aux" process inspection, and no suspicious connections were found with "netstat" (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
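
    A hedged sketch of the kind of containment and follow-up checks often suggested for a case like this (standard Debian tools; the winbind grep only shows where PAM could be granting access, so adapt paths and names to your setup):

        # stop further interactive ssh logins for the mail system account
        echo 'DenyUsers mail' >> /etc/ssh/sshd_config
        /etc/init.d/ssh restart
        usermod -s /usr/sbin/nologin mail
        # see which PAM configs pull in pam_winbind, since that is what granted access
        grep -R winbind /etc/pam.d/ /etc/nsswitch.conf
        # look for activity left behind by the account
        last -a | grep mail
        find / -user mail -type f -newer /var/log/auth.log -ls 2>/dev/null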

    Read the article

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or client machine), maybe in an environment variable? Background: I have a common web administration page for an e-mail server and I'd like to know who is doing what. When a user successfully logs in via basic auth, I somehow want to be able to identify them and log their actions, so that each time a request is submitted I can write to a log file. The basic format would be: $username ran a $function against $useraccount. So if a user changed someone's permissions, e.g.: Admin-Bob ran a permission change against User-Scott. Then if errors occur, I can easily trace back in the log file what actions led to the cause. I tried checking the %ENV hash to no avail, any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which gives me a fine degree of control already. If I have to code something with sessions, I'd need to implement a system to block users after maximum tries and so on, which I don't really want to code. I think this is better geared towards Server Fault because it pertains to Apache more so than the programming language; sessions can be done in a myriad of languages.
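
    For requests that have passed basic auth, Apache exposes the authenticated username to CGI programs as the REMOTE_USER environment variable (in Perl that is $ENV{REMOTE_USER}; it is only set during a request, not when you inspect %ENV from a shell), and the %u field of a LogFormat records it in the access log. A minimal CGI sketch, assuming the script sits behind the existing basic auth:

        #!/bin/sh
        # prints the basic-auth user Apache authenticated for this request
        echo "Content-Type: text/plain"
        echo ""
        echo "Authenticated as: ${REMOTE_USER:-<no authenticated user>}"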

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" which limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application, however in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem, however I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom copy of the program files for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address, however it's really messy and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • Easiest way to do host name resolution with IPA?

    - by Luke
    We are currently using static LAN IP addresses for our internal non-public-facing servers. We don't have DHCP configured. We're using Vyatta for our router and firewall, and the firewall is configured to be zone based. We want to set up IPA for centralized authentication (LDAP+Kerberos). IPA requires resolvable host names, and I want to avoid having to enter DNS records by hand. What is the most painless way to make host names resolvable that works with IPA in a Linux-only environment? We aren't using anything to resolve host names now; up until now we've been using static IP addresses and local users on each server. We've looked at BIND, DHCP (does that even solve the problem?), and multicast DNS. At this point we're not sure which solution would work best. Is there another option we haven't considered? Security is very important. We have multiple zones where each zone has very specific or no access to another zone. DNS for public domains is forwarded from Vyatta to our ISP's DNS server.
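
    One option worth naming: FreeIPA can run its own integrated DNS (BIND with the LDAP back end), and enrolled clients keep their own records current via dynamic DNS updates, so nothing has to be typed in by hand. A hedged sketch, assuming the IPA domain is example.lan and the forwarder address is the ISP resolver (both placeholders):

        # on the IPA server: include the integrated DNS at install time
        ipa-server-install --setup-dns --forwarder=203.0.113.53 --domain=example.lan --realm=EXAMPLE.LAN
        # on each Linux client: enroll and let SSSD keep the A/PTR records updated
        ipa-client-install --domain=example.lan --enable-dns-updates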

    Read the article

  • Same native and tagged vlan possible on Redhat?

    - by Chris Phillips
    Hi guys and gals, I'm looking at implementing a system using a number of tagged VLANs and a native VLAN connected to a server over an active/passive bonded interface. The untagged VLAN is for physical machine access; the tagged VLANs are connected to bridges and then to QEMU VMs inside the machine. Hopefully this plan is fine, but I'm trying to implement a crippled version of it in a dev environment, due to a lack of underlying network config in this location, where I just have the same single VLAN delivered to the machine on a tag AND plain. I'm not clear whether this is going to work (and whether I should just be confident that it will work using different VLANs), as I'm seeing odd things: a VM is arping out over the VLAN to the core switch, but the ARP reply is coming back on the untagged interface. Now an ARP reply is unicast, right? So it's a deliberate thing to send the ARP response on the untagged interface, and not a case of a broadcast response not being passed on the tagged side... i.e. there's some underlying logic pushing it that way. Something about the MACs somehow? This is on a CentOS 5.5 machine, VLANs from vconfig. (I've seen reference to the Linux mac-vlan project work, but that's not available here by default.) So 1) Should having the SAME VLAN tagged and untagged work? 2) Will different tagged VLANs plus the untagged interface work nice and easily?
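
    For reference, this is roughly how a tagged VLAN sub-interface bridged for QEMU guests is usually written on CentOS 5 with the standard initscripts; bond0, VLAN 100 and br100 are placeholders, and this is a sketch of the normal separate-VLAN case rather than a known-good config for the shared tagged/untagged setup:

        # create the tagged sub-interface on the bond and hang it off a bridge
        cat > /etc/sysconfig/network-scripts/ifcfg-bond0.100 <<'EOF'
        DEVICE=bond0.100
        VLAN=yes
        BRIDGE=br100
        ONBOOT=yes
        BOOTPROTO=none
        EOF
        cat > /etc/sysconfig/network-scripts/ifcfg-br100 <<'EOF'
        DEVICE=br100
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=none
        EOF
        service network restart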

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04

    /etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check wheter script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    Some vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I did nano /home/user/web/id.php and pasted <?php system('id'); ?> into it, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thanks.
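
    The uid=33(www-data) output means the script is still being run by mod_php rather than by suPHP (the vhost hands .php to the application/x-httpd-php handler, while suphp.conf only defines application/x-httpd-suphp). A hedged first check on Ubuntu 10.04, assuming the stock libapache2-mod-suphp and libapache2-mod-php5 packages:

        # make sure mod_php is not grabbing .php files ahead of suPHP
        a2dismod php5
        a2enmod suphp
        /etc/init.d/apache2 restart
        # the suphp module config (mods-available/suphp.conf) normally maps .php to
        # application/x-httpd-suphp; the vhost's AddHandler application/x-httpd-php
        # line would then need to go, or be changed to the suphp handler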

    Read the article

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it, just pure routing. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500 and then no more connections can be created for several minutes, then another 100 connections can be created and there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum goes up about 10 times, so the problem is probably with many connections from different IPs. As the traffic is simply routed, it shouldn't have to do with the number of file descriptors, iptables connection tracking and other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
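
    A pattern of "a few hundred flows, then a stall, then a trickle more" with many unique IPs is what an overflowing neighbour (ARP) table, or a conntrack module loaded without any rules, tends to look like, so those are worth ruling out first; a hedged checklist (threshold value is an example only):

        # is connection tracking loaded even though no iptables rules are set?
        lsmod | grep -i conntrack
        cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null
        # neighbour table overflow shows up in the kernel log and is bounded by gc_thresh*
        dmesg | grep -i "neighbour table overflow"
        sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh3
        sysctl -w net.ipv4.neigh.default.gc_thresh3=8192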

    Read the article

  • Can I autoregister my servers' hostnames in my local DNS?

    - by Christian Wattengård
    We have evaluated a W2k12 server as a domain controller at work. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP for my "pop-up" dev servers too.) We need to rework our work network for several odd reasons, and in this new scenario there was no money for an extra Windows 2012 license. We have at our disposal several old boxes that run Linux quite well. Is it possible to set up a DNS-server "appliance" where each machine somehow autoregisters its own hostname? Scenario:

        Router (N66u) on 172.20.20.1. Runs DHCP on the 172.20.20.100-200 range.
        Server [verdant] of a *nix flavor on 172.20.20.2
        Laptop [speedy] of W8 flavor, DHCP assigned
        Laptop [canary] of W8 flavor, DHCP assigned
        Desktop [lianyu] of Ubuntu flavor, DHCP assigned

    What I would like is for all of the above machines (except possibly the router) to be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to be able to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment?
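
    A common way to get this on a spare Linux box is dnsmasq, which serves both DHCP and DNS and automatically publishes each DHCP client's hostname under the local domain (Windows and Ubuntu clients both send their hostname in the DHCP request by default). A minimal sketch, assuming dnsmasq takes over the DHCP range from the N66u and the domain is starling.lan:

        apt-get install dnsmasq
        cat > /etc/dnsmasq.d/starling.conf <<'EOF'
        domain=starling.lan
        local=/starling.lan/
        expand-hosts
        dhcp-range=172.20.20.100,172.20.20.200,12h
        EOF
        service dnsmasq restart
        # then disable DHCP on the router and point clients' DNS at this box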

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":

        1. BUILTIN\Administrators
        2. MYDOMAIN\Schema Admins
        3. MYDOMAIN\Offer Remote Assistance Helpers
        4. MYDOMAIN\MySecurityGroup

    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article

  • Is Ubuntu a bad distro for a standalone mysql database server?

    - by DhruvPathak
    I read an article here: http://www.mysqlperformanceblog.com/2011/12/08/which-linux-distribution-for-mysql-server/ It says, among other things: "On the other end there are Debian and Ubuntu. Both use tool called dpkg for package management. There isn't a month that I log in to a system based on either distribution where there are no issues with packages consistency. Unfinished installations, unresolved conflicts are so common that it's just beyond simple negligence. The packaging system is just not robust enough. Another problem is that one broken package may block you from installing or uninstalling anything else. Imagine that someone left system in such shape, you prepared for downtime, stopped MySQL and... error – text editor has not been properly installed, so you cannot upgrade MySQL either until the problem is fixed. In a stressful situation when downtime clock ticks – annoying at best." We prefer Ubuntu server because of familiarity and because Ubuntu is also our development environment. Questions: Is Ubuntu commonly used in production for a MySQL database server? Is it ever worth the trouble to have one distro, e.g. Ubuntu, on the web server and another, say Red Hat, on the database server? Or is a homogeneous server pool a better choice?

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow:

        1. Edit files on local computer (eg. files in C:\Users\Me\web)
        2. Commit changes to local repository using rightclick - TortoiseSVN - SVN Commit.
        3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.

    I was wondering if there was a way in which I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64, Notepad++ text editor. Files edited are PHP, CSS, JS, HTML, etc. The server is running Linux with PHP 5.2 and MySQL. FileZilla is used to upload files. I can connect to the server via SSH if that is needed. Thank you in advance.
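
    One way to drop step 3 is a TortoiseSVN client-side post-commit hook (Settings > Hook Scripts) that pushes the working copy to the server over the SSH access already in place. This is a sketch under the assumption that an rsync/ssh client is available on the Windows side (for example via Cygwin), with host and paths as placeholders:

        #!/bin/sh
        # hypothetical post-commit hook: mirror the working copy to the web root,
        # skipping .svn metadata; key-based ssh avoids password prompts
        rsync -az --delete --exclude '.svn' \
            "/cygdrive/c/Users/Me/web/" user@example.com:/var/www/site/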

    Read the article

  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, need some advice on how to proceed with this situation. My current scenario is that I have several PostgreSQL (50+) boxes deployed throughout various locations and data centers, and a beefy PostgreSQL box set up at a homebase location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things (I realize some of these options overlap and some might only contain mutually exclusive solutions; however, I'm interested to hear your thoughts):

        1. Remotely query the deployed boxes and pull the results back to the homebase box for processing.
        2. Nightly (remote) "sync" or dump the deployed boxes' databases to a master database on the homebase box.
        3. Remotely push a table entry to all of the deployed boxes from the homebase box.
        4. Ensure security of data in transit, and of the remotely deployed boxes.

    Up to this point I've been floating on a homebrew multithreaded Python/Perl system that SSHes into these boxes remotely, which are ACL'ed off to the homebase server, and pulls (or pushes) the raw query results over the ssh connection. I haven't even touched #2 (remote syncing) as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stick to my FreeBSD/Linux environment.
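
    A minimal sketch of item 2 above (the nightly pull), reusing the ssh trust that is already in place; database names, the host list and paths are placeholders, and each site lands in its own database so one bad site can't poison the rest:

        #!/bin/sh
        # pull a compressed dump from every deployed box and restore it into a
        # per-site database on the homebase server (names and paths are examples)
        while read host; do
            ssh "$host" "pg_dump -Fc deployed_db" > "/var/backups/${host}.dump"
            dropdb   "agg_${host}" 2>/dev/null
            createdb "agg_${host}"
            pg_restore -d "agg_${host}" "/var/backups/${host}.dump"
        done < /usr/local/etc/aggregation-hosts.txt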

    Read the article

  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written, which captures some environment variables (time, username, computername, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) I update another field in the same record with the logout time. The above is working exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record. Restrictions: the .NET binary should be able to live on a network share (\server\share\myapp\v1) so I can update the application to (\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach? My questions: first, is there anything wrong with my idea above? Second, if so, what is the best way (through Group Policy or otherwise) to ensure this application launches whenever a session is started on a server?

    Read the article

  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped. It doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal:

        Utmp slot not found -> not removed

    and I am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem determination tools are available to investigate what is slowing down this login process?
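
    A hedged set of ways to see where the time actually goes (the screen message does point at utmp handling, so that is included; user and host names are placeholders):

        # watch which phase of an ssh login stalls (DNS, auth, shell start, ...)
        ssh -vvv user@server
        # compare a non-interactive login with an interactive one
        time ssh user@server true
        # on the server: a corrupt or oversized utmp/wtmp would match the
        # "Utmp slot not found" message that screen prints
        ls -lh /var/run/utmp /var/log/wtmp
        # trace a fresh login to find the slow call (run as root, then log in again)
        strace -f -tt -o /tmp/sshd.trace -p "$(pgrep -o sshd)"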

    Read the article

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs, and cron for a backup routine. I have ssh keys between the boxes set up to allow for passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and without being asked for a password the share mounts successfully. Yet when run by cron the script fails. The path to sshfs is identical to the value of "which sshfs". Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>
        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance. It further seems odd given that the paths appear to be correct. I've also attempted to compare the output of env on the shell with env inserted into the script; I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
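
    A hedged guess at the cause: cron's stripped-down PATH (/usr/bin:/bin, as the mail shows) does not include the directory holding the FUSE mount helper that sshfs execs, so the "mount program" really is not found even though sshfs itself is called by full path. Something along these lines, with the helper's actual location treated as an assumption to verify:

        # find where the mount helper actually lives on this box
        which mount_fusefs sshfs
        # then either widen PATH inside the script...
        export PATH="$PATH:/usr/local/bin:/usr/local/sbin"
        # ...or set it at the top of root's crontab
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin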

    Read the article

  • How to deactivate Active Directory users 1 month after their FIRST LOGIN, instead of defining a fixed expiration date

    - by smhnaji
    We want to give access to some Active Directory users so they can remotely access our server and download from a special folder on the server. The licenses we give to users are time based; there should be 1-month, 2-month, ..., 1-year, ... licenses. CURRENT SITUATION (WHAT I DON'T WANT): when users are created and added to the OS, a fixed expiration date is given. WHAT I WANT: a user's expiration date should be calculated automatically from their first login, because the user might not need the account right when the license is purchased. In other words: right now, when a user's license is purchased on Jan 1st, it runs until Feb 1st no matter whether he really logs in or not, so he cannot come on Feb 5th and begin using his license because it has already expired by then. What I want is that when he comes on Feb 5th and begins using it, the license runs until March 5th. The working environment is Windows Server 2012. By the word 'user', I mean Active Directory users.

    Read the article

  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need to have local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are 3rd-party tools I could use to log this information? With the Linux systems I used in past ages, you could look at a user's bash history file to see what commands they had run. While I realise that specific log could also be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are ways I can also log other system-configuration-type changes they make (not necessarily command-line based), that's also useful. I know about event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.

    Read the article

  • Mercurial hook fails on Windows

    - by Nick Hodges
    I am trying to use the headcount hook (https://bitbucket.org/dgc/headcount/overview) with my main develop repository. I pulled the code and placed it in C:\Python26\Lib\site-packages. I made the following entries in my hgrc file:

        [hooks]
        pretxnchangegroup.headcount = python:headcount.headcount.hook

        [headcount]
        push_ok = *
        commit_ok = *
        warnmsg = %(headcount)d new heads detected. You may not push new heads to this repository.
        debug = False

    All this is as per the install instructions. I then cloned the repository, created a branch, committed a change to that branch, and then issued "hg push -f" as a test. However, this fails with:

        C:\junk\htmlwriter>hg push -f
        pushing to c:\code\htmlwriter
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        transaction abort!
        rollback completed
        abort: pretxnchangegroup.headcount hook is invalid (import of "headcount.headcount" failed)

    I then ran this:

        C:\Python26>python c:\Python26\Lib\site-packages\headcount\headcount.py
        Traceback (most recent call last):
          File "c:\Python26\Lib\site-packages\headcount\headcount.py", line 2, in <module>
            import mercurial.node
        ImportError: No module named mercurial.node

    I'm far from a Python expert, so can someone help me figure out how to get the headcount hook to run inside my Mercurial environment? Details: Windows 7, Mercurial 1.7.2, TortoiseHg 1.1.7
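
    A hedged way to narrow this down: the ImportError says the Python that ends up running the hook cannot see the mercurial package (TortoiseHg typically ships its own embedded Python, which does not look in C:\Python26\Lib\site-packages). Checks along these lines, with the install step only relevant if you want the system Python 2.6 to drive the hook:

        # which Python and libraries does hg itself use?
        hg debuginstall
        # can that Python import both the mercurial package and the hook?
        python -c "import mercurial.node; import headcount.headcount; print 'both importable'"
        # if the system Python has no mercurial package, installing one there is
        # one option (needs a Mercurial source/pure install for that interpreter)
        easy_install mercurial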

    Read the article

  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of cgi (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server; the domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache, as most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link in the individual cgi-bin to this common directory. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on Stack Overflow and someone suggested that I ask it on Server Fault.)
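
    One shape this can take without touching the cPanel-generated vhosts is a single shared directory exposed through a ScriptAlias placed in one of cPanel's global include files, so every domain sees the same scripts at one URL path. A hedged sketch; the include filename and any suexec implications should be checked against your cPanel version:

        mkdir -p /opt/shared-cgi
        cat >> /usr/local/apache/conf/includes/pre_virtualhost_global.conf <<'EOF'
        ScriptAlias /shared-cgi/ "/opt/shared-cgi/"
        <Directory "/opt/shared-cgi">
            Options +ExecCGI +FollowSymLinks
            AddHandler cgi-script .cgi .pl
            Order allow,deny
            Allow from all
        </Directory>
        EOF
        /scripts/restartsrv_httpd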

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to authenticate against the Lion server, as this allows for better policies and management for me, but to have their afp share located on the NAS drive. I've looked into home directories and network logins, however I don't want the users to connect into a different login environment; just an authentication against their provided account on the Lion server, with the Finder taking them to their own storage area located on the NAS drive. Currently I am using FreeNAS for both authentication and storage, however there are getting to be far too many people to manage each afp share and account, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against, however I have had issues with this and thought a different approach could be better from the other side of the situation... providing the account with somewhere to store data, rather than the afp share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it can be recognised as a local drive and used for network accounts?

    Read the article

  • Office 2010 Trust Center settings: How to enable data connections in the "old" way?

    - by GSerg
    We're planning an upgrade from Office 2003 to 2010 and have identified a big problem. In Office 2003, if the workbook you're opening contains a query table that fetches data from a data source automatically (upon file open or at certain intervals), then a security dialog pops up asking whether you want to allow that. If you say Yes, the queries will refresh automatically when they need to. If you say No, the queries will not refresh automatically, neither on file open nor on time intervals, but you will be able to refresh any of them manually at any time by right-clicking and selecting Refresh. There is also a registry parameter to say: don't display that dialog, just allow the queries. This is exactly what we want. On users' computers we have the registry parameter applied, so the users never see any dialogs. On developers' computers the parameter is not applied, so every time a file is opened the developer decides whether to allow the auto-refreshing for the current session. Usually the answer is No, because for developing it is essential to not have queries refresh when they want to, but instead to refresh them when the developer wants. The problem is that in Office 2010, which we are testing, we can't find a way to achieve this functionality:

        - The allow/disallow messages are now grouped into one yellow button that either allows everything or disallows everything (including, say, macros, if macro security is set to "Disable, but ask").
        - If you don't click the yellow Allow button, the queries are disabled completely, not just for automatic execution.
        - You cannot right-click and refresh a particular query: doing that would summon a security dialog prompting for enabling queries, and if you say Yes, all queries in the document will be enabled for auto-execution and will start executing immediately.

    This sort of ruins our development environment. Is there a way to get the trust thingies in Office 2010 to work in the same way as before? Is there yet another registry parameter to say: prompt for auto-refresh, but allow manual refresh even when auto-refresh is disabled?

    Read the article

  • Zabbix doesn't update value from file neither with log[] nor with vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment where I have to generate the desired data to a file via a script, then upload that file to ftp from the host and download it to the Zabbix server from ftp. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below:

        log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1]
        vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1]

    The line I am parsing looks like this:

        C: 8195Mb 5879Mb 2316Mb 28.2

    The value I want to extract is the 28.2 at the end of the line. The problem I am currently trying to solve is that when I update the file (upload from the host to ftp, then download from ftp to the Zabbix server), the value does not update. I was trying only log[] at first, but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, following the documentation, it should with the "all" value), so I added the vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines already caught even if there are changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use the debug logging level for the agent, but I haven't found any interesting info about this problem. I have no idea what I might be doing wrong or what I don't know about how Zabbix performs these checks. I see two other solutions: adding more lines to the file instead of making a new one, or making new files and checking them with logrt[], but those don't satisfy my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.
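
    A hedged workaround that sidesteps the log[]/vfs.file.regexp[] semantics entirely: let the agent on the box holding the file extract the number itself with a UserParameter and a one-line sed, and poll that as a plain numeric item. The key name, config path and sed expression below are illustrative, written against the sample line above:

        # /etc/zabbix/zabbix_agentd.d/diskpct.conf  (then restart the agent)
        UserParameter=custom.diskpct,sed -n 's/^C.*[[:space:]]\([0-9]\+\.[0-9]\)$/\1/p' /path/to/file.txt

        # quick test from the Zabbix server
        zabbix_get -s <agent-host> -k custom.diskpct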

    Read the article

  • Single Sign On 802.1x Wireless - saying “Connecting to <SSID>”, hangs for 10 seconds, fails with “Unable to connect to <SSID>, Logging on…”.

    - by Phaedrus
    We are implementing WiFi on Windows 7 machines in our corporate environment. Machines should be able to log into the domain over WiFi as the Machine (pre-logon) and as the User (post-logon). We have everything working correctly except for 2 things: 1) sometimes the login scripts don't run; 2) the user VLAN is sometimes different from the machine VLAN, and no DHCP renew occurs after user logon. I am clear that both these problems should be fixable by using the "Single Sign On" option under the 802.1x Wireless Vista GPO, setting the wireless to connect immediately before user logon, and also enabling "This network uses different VLAN for authentication with machine and user credentials". If I enable these GPO settings in a lab, the computer does authenticate and gets WiFi before the user logs on, so when the login box is displayed it says "Windows will try to connect to <SSID>", even though it is already connected (which should be ok?). Enter the user credentials and it goes to a screen saying "Connecting to <SSID>", hangs for 10 seconds, and fails with "Unable to connect to <SSID>, Logging on...". The desktop fires up and then the user re-authenticates with no problem as himself instead of the machine, but by that point we defeat the point of the WiFi SSO "before user logon". Also by that point no DHCP renew seems to occur, and the user is still stuck with the wrong IP address for the new VLAN. When the "Connecting to <SSID>" screen comes up, there's no indication on the AP or the RADIUS server that anything whatsoever is happening after credentials are entered, until after the domain logon. Also, with this policy enabled, Windows sometimes hangs on a black screen indefinitely until I disable the wireless NIC, so something is knackered for sure. What have I missed? Suggestions are much appreciated... /P

    Read the article

  • virtual host settings fail on multiple sites

    - by Ricalsin
    Wow. I'm puzzled. On my Ubuntu system I've set up an apache2 server and configured three virtual hosts in the /etc/apache2/sites-available directory, then used a2ensite to symlink them into sites-enabled. The first two work great: a simple URL of localhost.mysitenames.com works for each of the first two sites, both finding their DocumentRoot and Directory paths. The third always generates a Bad Request (Invalid Hostname) response; there is nothing in the server error.log as it never hits it. I've copied/pasted the working vhost files, made the minor changes to the ServerName, DocumentRoot and Directory, and the same problem persists. I always "sudo /etc/init.d/apache2 restart" whenever I make a change. I've cleared the browser cache as well. No love. There's not a limit to the number of sites you can host, right? My goal was a localhost development environment with the expectation that I can run any number of websites locally before pushing them to a live server. Any thoughts on how to debug this? Or just a simple solution I am missing?
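
    A hedged set of checks for the third site (hostnames below are placeholders). The usual suspects are the name not resolving to 127.0.0.1 (a missing /etc/hosts entry means the request never reaches this Apache at all, which would also explain the empty error.log) or the vhost not being picked up:

        # does Apache itself know about the third ServerName?
        apache2ctl -S
        # does the name resolve locally to this machine?
        getent hosts localhost.third-site.com
        grep -n third-site /etc/hosts /etc/apache2/sites-enabled/*
        # bypass name resolution entirely and send the Host header by hand
        curl -v -H 'Host: localhost.third-site.com' http://127.0.0.1/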

    Read the article
