Search Results

Search found 30486 results on 1220 pages for 'network level auth'.


  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when we do that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first Puppet run). My process would look something like this:

    1. basic OS installation
    2. basic network configuration, enough to reach the internet and resolve DNS
    3. install Puppet and set up certname (sketched below)
    4. start Puppet and let it manage the whole configuration
    5. test, fix problems in config (via Puppet), re-test, and so on...
    6. manually stop Puppet
    7. set up the new network configuration for the datacenter network
    8. move the machine to the DC
    9. turn it on; Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
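
    A minimal sketch of what I mean by step 3 (the FQDN and master name are placeholders, not our real ones):

        # /etc/puppet/puppet.conf -- written at provision time, before the first run
        [agent]
        # pin the node/cert name to the final FQDN so the certificate survives
        # the reverse-DNS change when the box moves to the datacenter
        certname = web01.dc1.example.com
        server   = puppetmaster.example.com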


  • Ethernet cable unplugged after updating from windows 8.0 to 8.1

    - by Pehmolelu
    Yeah, so I went and updated my Windows 8 Pro to 8.1. Now everything else seems to work but the network. The Ethernet connection just says the network cable is unplugged, even though it is plugged in; I have tried a different cable as well, and I have tested that the router works. I have tried uninstalling the network drivers (Realtek PCIe GBE) and reinstalling them, with no success. After installing the drivers, Device Manager gives an error for the adapter: "Device could not be started. Code 10".

    Before the 8.1 update I had rt630x64.inf; after the update it was netrt630x64.inf, and after installing the latest driver, rt630x64.inf again. With rt630x64.inf there isn't any error, but it's still just not working.

    New downloaded version: 8.020.0815.2013 (from the Realtek website)
    The driver before: 8.1.510.2013 (after updating Windows to 8.1)

    I'm using a desktop PC, no VM; I don't have VirtualBox or VMware installed. I have checked in the BIOS that the card is enabled. I have booted in safe mode with networking enabled; it says unplugged there as well. I have powered the machine off for a few minutes and back on, with no effect. If anyone has any kind of suggestions, please tell.
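
    One more thing I'm considering trying, sketched here for reference (oemNN.inf is a placeholder for whatever number pnputil actually lists for the Realtek package):

        :: from an elevated command prompt: list third-party driver packages
        pnputil.exe -e
        :: force-delete the stale Realtek package so 8.1 cannot fall back to it,
        :: then reinstall the driver downloaded from Realtek
        pnputil.exe -f -d oemNN.inf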


  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I'm after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I'm looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS.

    In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image?

    I'm trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I'm just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.


  • What are possible results/side effects if replication between DC's in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there about how to properly manage Windows servers. But in real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate.

    Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or they are "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in.

    After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting the DC back on the network, and I am afraid something else could go wrong. So, what other issues could one expect to run into when two DCs are unreplicated for over a month?
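
    For anyone else landing here, these are the standard built-in checks I'd run from the healthy DC before reconnecting anything (nothing here is specific to my domain):

        rem one-line replication health summary across all DCs
        repadmin /replsummary
        rem inbound partners and last replication result per naming context
        repadmin /showrepl
        rem general DC health: DNS, SYSVOL, advertising, trusts
        dcdiag /v

    Also worth knowing: if a DC stays offline longer than the forest's tombstone lifetime (60 or 180 days by default), it must not be reconnected at all; it should be forcibly demoted and rebuilt.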


  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODOs so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by title of the task?

    EDIT: If anyone can please help me to modify this so that undated TODOs are sorted last, that would be greatly appreciated -- at the present time, undated TODOs are not being sorted:

        ;; multiple sort
        (defun org-sort-multi (&rest sort-types)
          "Multiple sorts on a certain level of an outline tree, or plain list items.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.

        Example: To sort first by TODO status, then by priority, then by date,
        then alphabetically (case-sensitive) use the following call:
        (org-sort-multi '(?d ?p ?t (t . ?a)))"
          (interactive)
          (dolist (x (nreverse sort-types))
            (when (char-valid-p x) (setq x (cons nil x)))
            (condition-case nil
                (org-sort-entries (car x) (cdr x))
              (error nil))))

        ;; sort current level
        (defun lawlist-sort (&rest sort-types)
          "Sort the current org level.
        SORT-TYPES is a list where each entry is either a character or a cons pair
        (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and
        CHAR is one of the characters defined in `org-sort-entries-or-items'.
        Entries are applied in back to front order.
        Defaults to \"?o ?p\" which is sorted by TODO status, then by priority"
          (interactive)
          (when (equal mode-name "Org")
            (let ((sort-types (or sort-types
                                  (if (or (org-entry-get nil "TODO")
                                          (org-entry-get nil "PRIORITY"))
                                      '(?d ?t ?p) ;; date, time, priority
                                    '((nil . ?a))))))
              (save-excursion
                (outline-up-heading 1)
                (let ((start (point)) end)
                  (while (and (not (bobp)) (not (eobp)) (<= (point) start))
                    (condition-case nil
                        (outline-forward-same-level 1)
                      (error (outline-up-heading 1))))
                  (unless (> (point) start) (goto-char (point-max)))
                  (setq end (point))
                  (goto-char start)
                  (apply 'org-sort-multi sort-types)
                  (goto-char end)
                  (when (eobp) (forward-line -1))
                  (when (looking-at "^\\s-*$")
                    ;; (delete-line)
                    )
                  (goto-char start)
                  ;; (dotimes (x ) (org-cycle))
                  )))))


  • MongoDB on EC2 - Creating a replicaset across DCs

    - by ankitb
    We are trying to get a MongoDB setup in EC2 going. I had a few questions:

    - Should we turn on auth since the MongoDB endpoint will have a public VIP? Any big hit on perf with auth enabled?
    - Best way to deploy a replica set in EC2? Do I have to deploy all 3 nodes individually and configure them, or can I use a tool to automate the deployment? We would like one of the secondaries to be located in a different DC than the primary.
    - Ubuntu or RHEL? And what version?

    Thanks!
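
    To make the second question concrete, this is the kind of manual initiation I have in mind from the mongo shell (hostnames and priorities are made-up placeholders):

        // run once, connected to the intended primary
        rs.initiate({
          _id: "rs0",
          members: [
            { _id: 0, host: "ec2-a.example.com:27017", priority: 2 },
            { _id: 1, host: "ec2-b.example.com:27017" },
            // the cross-DC secondary: priority 0 so it never becomes primary over the WAN
            { _id: 2, host: "otherdc-c.example.com:27017", priority: 0 }
          ]
        })

    As I understand it, once auth is enabled the members also need a shared key file (--keyFile) so they can authenticate to each other.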


  • Router(s) Issue: DNS queries sporadically fail with multiple computers hooked in

    - by bob-the-destroyer
    Basically, after anywhere from 5-60 minutes, DNS queries fail for a few minutes, then slowly begin to resolve correctly. Then the cycle repeats. This occurs only when more than one computer is on the network, and all computers on the network experience the same sporadic DNS outage at the same time. Wireless or wired, Linux or Windows, fresh OS install or old, browser or ping: same symptoms. Duplicated on 3 routers (not chained together, mind you), 3 ISPs, and 3 separate locations over the past several months. The only common theme is a single 5-year-old Win XP laptop which has been in use on the network throughout all this. There may also be anywhere between 1-10 devices hooked up, wired or wirelessly, at a time. The only reprieve I have from this torture is using any VPN to an outside source - always smooth sailing. I typically set up any router to:

    a) use WPA2/etc. security;
    b) MAC whitelist;
    c) UPnP OFF (if available);
    d) always update firmware when available;
    e) obtain DNS from the ISP automatically;
    f) act as DHCP server for the internal network.

    Adjusting channels has no effect. Any ideas?
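
    If it helps anyone suggest a diagnosis, this is the comparison I can run during an outage (192.168.1.1 stands in for whatever the router's actual address is):

        # query through the router's DNS proxy
        dig example.com @192.168.1.1
        # bypass the router and ask a public resolver directly
        dig example.com @8.8.8.8
        # if the second works while the first times out, the router's DNS
        # proxy / NAT state table is the choke point rather than the ISP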


  • I have to manually change the DNS suffix order every time I connect to VPN. Can I change this permanently or fix the problem somehow?

    - by CarlB
    Sorry in advance, but I'm a programmer, not a network engineer, so I'm a noob at this stuff. Anyway, when I am not connected to the VPN from my work PC at home, I have the following DNS suffixes listed (real domain names substituted):

    enterprise.org
    network.org
    company.com
    us.enterprise.org

    After connecting to the VPN, one more DNS suffix is added to the very top of the list:

    problem-domain.com

    At this point, most network functions that I can normally perform when actually connected to the LAN in the office are unusable. I get error messages about network paths not being found and what-not. Anyway, I played around with the suffixes and realized that if I just moved problem-domain.com down one spot, to second in the list, all the problems went away. Unfortunately, it returns to the top spot every time I reconnect, and I tend to get disconnected frequently.

    Is there something else I can do about this, or should I just contact the IT department? I've had this problem before and they weren't able to resolve it, but I suppose it would be worth trying again if I could get a different person on the job. What I don't understand is that I thought it didn't matter what order the suffixes were in. Isn't Windows supposed to go through each suffix until it finds a match (or has gone through all the suffixes)? Why is it quitting after the first one? Thanks in advance.
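
    In case it matters, this is the stopgap I've been considering: pinning the search list myself (the cmdlet needs Windows 8 / PowerShell 3, and the domains are the placeholders from above). A manually set list overrides whatever the VPN pushes, which may be exactly what I want here:

        # force the global suffix search list into the order that works
        Set-DnsClientGlobalSetting -SuffixSearchList @(
            "enterprise.org", "problem-domain.com", "network.org",
            "company.com", "us.enterprise.org")
        # confirm the change
        Get-DnsClientGlobalSetting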


  • Cannot access Domain Controller through VPN

    - by Markus
    In our small network there is a Windows 2008 R2 domain controller that also serves as a Remote Access Server. For years, we could access this server and the resources in the network over a VPN connection without any problem. For some time now, however, I am able to connect to the VPN, but my Windows 8 client (and another one I used for testing purposes) is not able to reach the domain controller afterwards. I can access any other server in the network, but there seems to be a problem regarding the trust between the client(s) and the server.

    If I connect the client to the network directly over a LAN cable, everything works as expected. Also, I can connect to another server over VPN and open an RDP prompt to the DC without a problem. On the client, whenever I try to access the DC, I get an access denied message. I've tried to update the group policies both over VPN and LAN. Also, I've removed the client from the domain and re-added it. The client shows a message that Windows requires valid login information when connected to the VPN - but my credentials are valid. They work when I log on to the client when not connected to the VPN, and also when connected to the LAN. Turning off the firewall on the client and the server did not change anything. DNS resolution works both on the server and the client. What else can I do to diagnose and solve the problem?
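
    For completeness, these are the checks I can run from the client while on the VPN (MYDOMAIN is a placeholder), since the symptoms look like a broken machine secure channel or stale Kerberos tickets:

        rem is the machine account's secure channel to the domain healthy?
        nltest /sc_query:MYDOMAIN
        rem throw away cached Kerberos tickets before retrying
        klist purge

        # PowerShell: verify and, if broken, repair the secure channel
        Test-ComputerSecureChannel -Repair -Credential MYDOMAIN\Administrator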


  • Wireless Internet Connection Sharing in Ubuntu

    - by klutch2
    As the title states, I need to share a wireless connection with a laptop running Ubuntu as the AP. The setup will be as follows:

    Corporate WiFi <<== Laptop <<== Other Devices, i.e. (iPad, iPhone)

    I want to be able to connect the "Other Devices" via WiFi to the laptop. I have thought of setting up an ad-hoc network by connecting to the Corporate WiFi and then creating a new network, hoping the connection to both would stay, but that doesn't seem to work. If I set up the ad-hoc network by itself, I can see it from my "Other Devices". The reason I need this is that, for some reason, my iPad and my iPhone will not connect to my corporate WiFi, and I need to use them, so I want my laptop to share the connection and act as an AP for my "Other Devices". My laptop is a Chrome CR-48 running Ubuntu and, as some of you might know, it does not have an Ethernet port, so having a wired connection and then setting up a network is out of the question. I want to connect to the Corporate WiFi and share that connection by having the laptop act as an AP for other devices.
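
    From what I've read so far, whether this can work at all depends on the wireless chipset supporting station and AP modes at the same time (a big if). If the driver allows it, the pieces would look roughly like this; SSID, passphrase, and interface names are guesses for illustration:

        # create a virtual AP interface on top of the radio (driver permitting)
        sudo iw dev wlan0 interface add wlan0_ap type __ap

        # /etc/hostapd/hostapd.conf -- minimal AP definition
        interface=wlan0_ap
        ssid=laptop-share
        hw_mode=g
        channel=6                 # must match the channel of the corporate WiFi
        wpa=2
        wpa_passphrase=ChangeMe123
        wpa_key_mgmt=WPA-PSK

        # then NAT the AP traffic out through the station connection (wlan0)
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE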


  • Rsyslog : copy with change the facility

    - by Dom
    I have saslauthd saving its logs to LOG_AUTH on our rsyslogd server. That can't be changed without recompiling, and I don't want to do that. I would like to see all the LOG_AUTH messages in LOG_MAIL, because I export to an external machine and I would like all the saslauthd logs to land in LOG_MAIL on the distant server. Of course, locally I can add "auth.*" to the mail.log file section, but the export will not go to the right file because I filter the export by syslog facility/priority. How can I export all the AUTH logs as MAIL logs? Thanks
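
    In case someone suggests it: the one workaround I've seen sketched elsewhere is hardcoding the PRI value in the forwarding template, since facility and severity travel as the leading <N> of each relayed message; <22> is facility 2 (mail) x 8 + severity 6 (info). Something like this, with the log server name as a placeholder:

        # template that stamps every relayed message as mail.info
        $template AuthAsMail,"<22>%timestamp% %hostname% %syslogtag%%msg%"
        # relay the auth facility to the remote box using that template
        auth.*    @@logserver.example.com;AuthAsMail

    The obvious drawback is that the original severity gets flattened to info on the remote side.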


  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu, and I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify on a computer running Windows 7. I can access it in Windows 7 but not in Ubuntu 11.10; every time I try, I get a "disconnected" message. I'm using an MSI FX400 notebook with an Intel Centrino Wireless-N 1000 wireless card. The Ubuntu version is 11.10 with the KDE desktop.

        $ sudo lshw -c network
        [sudo] password for ht3t:
        *-network
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:06:00.0
             logical name: wlan0
             version: 00
             serial: 00:26:c7:56:b8:f0
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:44 memory:e7400000-e7401fff
        *-network
             description: Ethernet interface
             product: RTL8111/8168B PCI Express Gigabit Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:07:00.0
             logical name: eth0
             version: 06
             serial: 40:61:86:b6:b1:a2
             size: 100Mbit/s
             capacity: 1Gbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
             resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff

    I can't do anything without an internet connection. How can I fix this?
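
    A few quick checks I can run on the Ubuntu side (nmcli shown in the NetworkManager 0.9 syntax that ships with 11.10):

        # is the radio soft- or hard-blocked?
        rfkill list
        # does the Connectify SSID show up, and in which mode?
        nmcli dev wifi list
        # watch the association attempt while connecting
        dmesg | tail -n 30

    If the hotspot advertises as Ad-Hoc rather than Infrastructure, that alone could explain the failures; ad-hoc support in the iwlagn driver of that era was known to be shaky.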


  • Does cloud computing offer this? [closed]

    - by TheBlackBenzKid
    I have some newb questions I want answering, please, about cloud hosting - we are currently looking at Rackspace and getting a Windows box. This is the situation: We have 15 computers in our office, 3 printers, some WiFi and some network-plugged devices. We have a standard router and the office shares things via Dropbox. The computers are not on Windows SBS or anything similar. We want a cloud hosting solution that will offer:

    - Users can log in on any machine in the office and see the machine software
    - Users can log in on any machine in the office and open Outlook, and their emails and signature will be on Exchange automatically
    - A shared company folder on the network
    - All printers automatically installed on the network
    - Users can log in remotely to access emails via the web

    At the moment we have a network company saying we need a Xeon server in-house with backup and PSU and Windows SBS with a license for each machine, and also cabinets and cabling setup, plus load balancers and modification of our DNS for emails. My question is this: Can cloud offer this? Can we have a server in the cloud that does this? Is it possible, I mean, that the computers would be wirelessly connected to this cloud and you turn the machine on and it's hosted?


  • Run a MySQL server in a self-contained folder

    - by codersarepeople
    I've seen many questions about how to run multiple SQL servers on one server, but I would like to run mysqld as a user-level process, completely self-contained in a folder (I have no permissions outside my user folder). I spent some time using --defaults-file=my.cnf, but it still seems to conflict with the system-level MySQL server that's running. Does anybody know how to do this? Thanks in advance!
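
    To show what I'm after, here is the sort of fully private layout I've been attempting (paths and port are invented; the point is that datadir, socket, port, and pid-file all live under my home directory so nothing collides with the system server):

        # one-time: initialize a private data directory
        mysql_install_db --datadir=$HOME/mysql/data

        # $HOME/mysql/my.cnf
        [mysqld]
        datadir  = /home/me/mysql/data
        socket   = /home/me/mysql/mysqld.sock
        port     = 3307
        pid-file = /home/me/mysql/mysqld.pid

        # start it (--defaults-file must be the FIRST option) and connect
        mysqld --defaults-file=$HOME/mysql/my.cnf &
        mysql --socket=$HOME/mysql/mysqld.sock -u root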


  • subversion issue on mac os x

    - by user32942
    This exists in my httpd.conf file:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          Allow from all
          #AuthType Basic
          #AuthName "Subversion repository"
          #AuthUserFile /Users/iirp/Sites/svn-auth-file
          #Require valid-user
        </Location>

    This works fine. When I change it to:

        <Location /svn>
          DAV svn
          SVNParentPath /Users/iirp/Sites/svn
          #Allow from all
          AuthType Basic
          AuthName "Subversion repository"
          AuthUserFile /Users/iirp/Sites/svn-auth-file
          Require valid-user
        </Location>

    and access my repository through the URL, it gives me the authentication screen, but after that screen my svn repository is not showing up correctly. The message it gives me is:

        Internal Server Error

        The server encountered an internal error or misconfiguration and was unable
        to complete your request. Please contact the server administrator,
        [email protected] and inform them of the time the error occurred, and anything
        you might have done that may have caused the error. More information about
        this error may be available in the server error log.
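
    For what it's worth, this is the checklist I plan to go through next, since a 500 right after authentication often means Apache could read the config but not the password file (the username is a placeholder):

        # (re)create the password file: -c creates it, -m uses MD5 hashes
        htpasswd -cm /Users/iirp/Sites/svn-auth-file someuser
        # make sure the Apache user (_www on OS X) can read it
        sudo chown _www /Users/iirp/Sites/svn-auth-file
        sudo chmod 640 /Users/iirp/Sites/svn-auth-file
        # the real cause of the 500 should be spelled out here
        tail -n 20 /var/log/apache2/error_log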


  • Safely transfer files from pc with internet connection to lan without allowing any other form of communication

    - by Hugh Quenneville
    In the company where I work there are computers that are connected to the Internet and computers that are connected to a Local Area Network. The LAN is considered a "safe zone" and the files that reside there should never be copied/moved to a computer that has Internet access. So, now, if we want to download an installer for an application, for example, we download it on a PC that has Internet access and then move it using a "secure USB stick" to the Local Area Network.

    Is there a way to create a "safe, one-way connection" between a computer with Internet access and a computer on the LAN? This practically means that only files from the computer with Internet access can be copied/moved to the LAN. In addition, if you want to transfer files you would have to provide your security credentials for the network (so that only users with the appropriate access levels will be able to transfer files).

    Is it possible to create something like that and make it completely safe (or at least "equally safe" as the USB method we currently use), or is the fact that the computer with Internet access is connected with a wire to the LAN a security risk by itself?

    NOTE: the LAN setup involves 2 Windows 2003 servers with Active Directory, web servers, and pretty much all the services that you would expect to find in a Windows network.


  • Getting an error while setting up the connection pool in JBoss

    - by Yashwant Chavan
    Hi, I'm facing an issue with the following connection pool configuration:

    Place a copy of mysql-connector-java-[version]-bin.jar in $JBOSS_HOME/server/all/lib. Then, follow the example configuration file named mysql-ds.xml in the $JBOSS_HOME/docs/examples/jca directory that comes with a JBoss binary installation. To activate your DataSource, place an xml file that follows the format of mysql-ds.xml in the deploy subdirectory in either $JBOSS_HOME/server/all, $JBOSS_HOME/server/default, or $JBOSS_HOME/server/[yourconfig] as appropriate.

    I am getting the following error:

        resource-ref: jdbc/buinessCaliberDb has no valid JNDI binding. Check the jboss-web/resource-ref.

    This is my mysql-ds.xml:

        <datasources>
          <local-tx-datasource>
            <jndi-name>jdbc/buinessCaliberDb</jndi-name>
            <connection-url>jdbc:mysql:///BUSINESS</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>root</user-name>
            <password>password</password>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            -->
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql>
            -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool -
                 MySQLValidConnectionChecker is preferred for newer drivers
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
            -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>

    This is my web.xml entry:

        <resource-ref>
          <description>DB Connection</description>
          <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>

    And this is my jboss-web.xml entry:

        <jboss-web>
          <resource-ref>
            <description>DB Connection</description>
            <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
          </resource-ref>
        </jboss-web>

    Please help.
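
    If it helps: from what I can tell from the error text, JBoss 4.x expects the jboss-web.xml resource-ref to carry a jndi-name element pointing at the datasource's real binding (a local-tx-datasource is bound under the java: namespace), rather than repeating res-auth. A sketch of what I believe the corrected jboss-web.xml should be:

        <jboss-web>
          <resource-ref>
            <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <!-- map the logical name in web.xml onto the real JNDI binding -->
            <jndi-name>java:/jdbc/buinessCaliberDb</jndi-name>
          </resource-ref>
        </jboss-web>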


  • PHP setting cookies in a child class

    - by steve
    I am writing a custom session handler and for the life of me I cannot get a cookie to set in it. I'm not outputting anything to the browser before I set the cookie, but it still doesn't work. It's killing me. The cookie will set if I set it in the script I define and call the session handler with. If necessary I will post code. Any ideas, people?

    The calling script:

        <?php
        /* require the needed classes
           comment out what is not needed */
        require_once("classes/sessionmanager.php");
        require_once("classes/template.php");
        require_once("classes/database.php");

        $title=" "; //titlebar of the web browser
        $description=" ";
        $keywords=" "; //meta keywords
        $menutype="default"; //default or customer, customer is elevated
        $pagetitle="dflsfsf "; //title of the webpage
        $pagebody=" "; //body of the webpage

        $template=template::def_instance();
        $database=database::def_instance();
        $session=sessionmanager::def_instance();
        $session->sessions();
        session_start();
        ?>

    And this is the function that actually sets the cookie for the session:

        function write($session_id,$session_data)
        {
            $session_id = mysql_real_escape_string($session_id);
            $session_data = mysql_real_escape_string(serialize($session_data));
            $expires = time() + 3600;
            $user_ip = $_SERVER['REMOTE_ADDR'];
            $bol = FALSE;
            $time = time();
            $newsession = FALSE;
            $auth = FALSE;
            $query = "SELECT * FROM 'sessions' WHERE 'expires' > '$time'";
            $sessions_result = $this->query($query);
            $newsession = $this->newsession_check($session_id,$sessions_result);
            while($sessions_array = mysql_fetch_array($sessions_result) AND $auth = FALSE)
            {
                $session_array = $this->strip($session_array);
                $auth = $this->auth_check($session_array,$session_id);
            }
            /* this is an authentic session. build queries and update it */
            if($auth = TRUE AND $newsession = FALSE)
            {
                $session_data = mysql_real_escape_string($session_data);
                $update_query1 = "UPDATE 'sessions' SET 'user_ip' = '$user_ip' WHERE 'session_id' = '$session_id'";
                $update_query2 = "UPDATE 'sessions' SET 'data' = '$session_data' WHERE 'session_id = '$session_id'";
                $update_query3 = "UPDATE 'sessions' SET 'expires' = '$expires' WHERE 'session_id' = '$session_id'";
                $this->query($update_query1);
                $this->query($update_query2);
                $this->query($update_query3);
                $bol = TRUE;
            }
            elseif($newsession = TRUE)
            {
                /* this is a new session, build and create it */
                $random_number = $this->obtain_random();
                $cookieval = hash("sha512",$random_number);
                setcookie("rndn",$cookieval);
                $query = "INSERT INTO sessions VALUES('$session_id','0','$user_ip','$random_number','$session_data','$expires')";
                $this->query($query);
                //echo $cookieval."this is the cookie <<";
                $bol = TRUE;
            }
            return $bol;
        }


  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm delegating all session handling, and whether or not a user wants to log out, in my config.inc file. As I was writing my Auth class I started wondering whether my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (also, a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

        ini_set('session.name', 'SID');

        # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                       // destroy session data
            $_SESSION = array();                     // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) { // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();           // reset SID value
                $_SESSION['SID_CREATED'] = time(); // update creation time
            }

            if (isset($_SESSION['SID_MODIFIED']) &&
                (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                       // destroy session data
                $_SESSION = array();                     // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            }

            $_SESSION['SID_MODIFIED'] = time(); // update last activity time stamp
        }


  • Logs are written to *.log.1 instead of *.log

    - by funkadelic
    For some reason my log files are being written to the *.log.1 files instead of the *.log files; e.g. Postfix is writing to /var/log/mail.log.1 and not /var/log/mail.log as expected. The same goes for mail.err, and it looks like auth.log and syslog are affected too. Here is an ls -lt snippet of my /var/log directory, showing the most recently touched log files in reverse chronological order:

        -rw-r----- 1 syslog adm 4608882 Dec 18 12:12 auth.log.1
        -rw-r----- 1 syslog adm 4445258 Dec 18 12:12 syslog.1
        -rw-r----- 1 syslog adm 2687708 Dec 18 12:11 mail.log.1
        -rw-r----- 1 root   adm  223033 Dec 18 12:04 denyhosts
        -rw-r--r-- 1 root   root  56631 Dec 18 11:40 dpkg.log
        -rw-rw-r-- 1 root   utmp 292584 Dec 18 11:39 lastlog
        -rw-rw-r-- 1 root   utmp   9216 Dec 18 11:39 wtmp
        ...

    And ls -l mail.log*:

        -rw-r----- 1 syslog adm       0 Dec 16 06:31 mail.log
        -rw-r----- 1 syslog adm 2699809 Dec 18 12:28 mail.log.1
        -rw-r----- 1 syslog adm  331704 Dec  9 06:45 mail.log.2.gz
        -rw-r----- 1 syslog adm  235751 Dec  2 06:40 mail.log.3.gz

    Is something misconfigured? I tried restarting Postfix and it still wrote to mail.log.1 afterwards (same with postfix stop; postfix start, too).
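
    My working theory, for anyone who can confirm: logrotate renamed the files but rsyslog was never told to reopen them, so it keeps writing to the old inode (now *.log.1). On Ubuntu the rotation stanza is supposed to handle that in postrotate; a sketch of what /etc/logrotate.d/rsyslog should roughly contain:

        /var/log/mail.log
        /var/log/auth.log
        /var/log/syslog
        {
                weekly
                rotate 4
                compress
                sharedscripts
                postrotate
                        # tell rsyslog to close and reopen its output files
                        reload rsyslog >/dev/null 2>&1 || true
                endscript
        }

    If that's the cause, a one-off sudo reload rsyslog should flip logging back to the *.log files immediately.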


  • SO-overflow induced passivity - how to cope?

    - by Ruben
    After not really working on my pet project for a while, I discovered Stack Overflow, and upon perusing it more intensely I was quite amazed. I'm a bit of a perfectionist, so when I found eye-openers here highlighting many of the mistakes I made, I first wanted to fix everything. However, it's a pet project for a reason: I'm self-taught and I'm studying psychology, so programming skills can never become priority one (though it often helps, even in this field). Issues that stuck out were:

    - numerous security issues (e.g. CSRF prevention and bcrypt eluded me)
    - not object-oriented (at least the PHP part; the JS part mostly is)
    - no PHP framework used, so many of my DIY takes on commonly-tackled components (auth, ...) are either bad or inefficient
    - really poor MySQL usage (no prepared statements, the mysql extension, heard about setting proper indices two days ago)
    - using MooTools even though jQuery seems to be fashionable, so there's probably always going to be better integration with services I'd like to use (like Google Visualization)

    So, my SO-induced frenzy turned into passivity. I can't do it all (soon) in the rather small amount of spare time I can spend on the project. I can leave some of the issues be in good conscience (speed stuff: an unfinished & unpublished project will never become popular, right?). No clear conscience without good security, though, and if I don't use a framework for auth and other complex stuff, I'll regret having to do it myself.

    One obvious answer would probably be going open source, but I think the project would need to become more impressive before others would commit to it. I can't afford to employ someone either. I do think the project deserves being worked on, though. How should I tackle it anyway? What's the best practice for little-practice people?


  • .htaccess do not work without index.php on CodeIgniter

    - by Mattia
    I have read a lot of topics with the same problem, but I have not found the solution. I have a LAMP setup on an Ubuntu server. My document root is /home/utente/; inside it I have another directory (turni) with a CodeIgniter web app. The web app works fine with index.php in the URL, but I want to eliminate it. I have this configuration:

    config.php in CodeIgniter:

        $config['index_page'] = '';

    .htaccess:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_URI} ^application.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]

    /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/utente
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/utente/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    When I open a link of the web app without index.php in the URL, the server shows me this error:

        The requested URL /turni/auth/login was not found on this server.

    Why? If I put index.php back, like /turni/index.php/auth/login, everything works fine.
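
    One thing I noticed myself while re-reading the vhost above: every Directory block says AllowOverride None, which (as I understand it) makes Apache ignore .htaccess files entirely, so the rewrite rules never run. The change I'm going to try (plus making sure mod_rewrite is enabled):

        <Directory /home/utente/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

        # enable mod_rewrite and reload Apache
        sudo a2enmod rewrite
        sudo service apache2 reload

    Since the app lives in /turni rather than the document root, RewriteBase /turni (with the .htaccess inside that directory) may also be needed.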


  • PowerBroker (Likewise-Open) + Ubuntu 13.04 -> 13.10 Upgrade

    - by JoBu1324
    I just upgraded Ubuntu from 13.04 to 13.10, and now I can't log into Active Directory; my system is integrated using PowerBroker Identity Services (PBIS), which used to be called Likewise-Open. So far I have identified the following symptoms:

    - I am able to log in with my credentials via ssh.
    - The screen goes black when attempting to log into my account via the login screen.

    I've tried leaving the domain, purging PBIS, and re-installing the latest version of PBIS. I've been trying the troubleshooting section I found here, but I haven't had any success.

    The relevant portion of auth.log:

        Oct 22 09:30:26 mypc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "myusername"
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm-greeter:session): session closed for user lightdm
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session opened for user myusername by (uid=0)
        Oct 22 09:30:29 mypc lightdm: pam_unix(lightdm:session): session closed for user myusername
        Oct 22 09:30:30 mypc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
        Oct 22 09:30:30 mypc systemd-logind[718]: New session c5 of user lightdm.
        Oct 22 09:30:30 mypc lightdm: pam_ck_connector(lightdm-greeter:session): nox11 mode, ignoring PAM_TTY :1
        Oct 22 09:30:31 mypc dbus[535]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.129" (uid=110 pid=5139 comm="/usr/lib/x86_64-linux-gnu/indicator-keyboard-servi") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.39" (uid=0 pid=2024 comm="/usr/sbin/console-kit-daemon --no-daemon ")

    My .xsession-errors log:

        Script for ibus started at run_im.
        Script for auto started at run_im.
        Script for default started at run_im.
        /usr/sbin/lightdm-session: 5: exec: init: not found


  • Unable to access any ubuntu shares from android/windows clients

    - by dan
    I am running Ubuntu 11.04 and can't seem to access any of my shares. Here is the output from testparm -s:

        Load smb config files from /etc/samba/smb.conf
        rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
        Processing section "[printers]"
        Processing section "[CanonMG2100AIO]"
        Processing section "[FreeAgent Drive]"
        Loaded services file OK.
        WARNING: You have some share names that are longer than 12 characters.
        These may not be accessible to some older clients.
        (Eg. Windows9x, WindowsMe, and smbclient prior to Samba 3.0.)
        Server role: ROLE_STANDALONE

        [global]
            server string = %h server (Samba, Ubuntu)
            encrypt passwords = No
            obey pam restrictions = Yes
            pam password change = Yes
            passwd program = /usr/bin/passwd %u
            passwd chat = Enter\snew\s\spassword:* %n\n Retype\snew\s\spassword:* %n\n password\supdated\ssuccessfully .
            username map = /etc/samba/smbusers
            unix password sync = Yes
            syslog = 0
            log file = /var/log/samba/log.%m
            max log size = 1000
            name resolve order = wins lmhosts host bcast
            dns proxy = No
            wins support = Yes
            usershare allow guests = Yes
            panic action = /usr/share/samba/panic-action %d

        [printers]
            comment = All Printers
            path = /var/spool/samba
            create mask = 0700
            guest ok = Yes
            printable = Yes
            browseable = No

        [CanonMG2100AIO]
            comment = Printer Drivers
            path = /var/lib/samba/printers
            read only = No
            guest ok = Yes

        [FreeAgent Drive]
            path = /media/FreeAgent Drive
            read only = No
            guest ok = Yes

    smbtree gives:

        Server requested plaintext password but 'client plaintext auth' is disabled
        anonymous failed session setup with NT_STATUS_INVALID_PARAMETER
        Server requested plaintext password but 'client plaintext auth' is disabled
        anonymous failed session setup with NT_STATUS_INVALID_PARAMETER

    and hostname:

        dekstop

    I know the spelling of desktop is incorrect; it was a "duh" moment. Any help would be greatly appreciated.
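
    Looking at the testparm output again, encrypt passwords = No stands out: modern Windows and Android clients refuse plaintext authentication by default, which would match the smbtree errors. The fix I intend to try (replace the username with the actual Samba user):

        # in the [global] section of /etc/samba/smb.conf
        encrypt passwords = yes

        # rebuild the encrypted password entry and restart the daemon
        sudo smbpasswd -a dan
        sudo service smbd restart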


  • Weird 302 Redirects in Windows Azure

    - by Your DisplayName here!
    In IdentityServer I don't use Forms Authentication but the session facility from WIF. That also means that I implemented my own redirect logic to a login page when needed. To achieve that I turned off the built-in authentication (authenticationMode="none") and added an Application_EndRequest handler that checks for 401s and does the redirect to my sign-in route. The redirect only happens for web pages and not for web services.

    This all works fine in local IIS - but in the Azure Compute Emulator and Windows Azure many of my tests are failing and I suddenly see 302 status codes where I expected 401s (the web service calls). After some debugging kung-fu and enabling FREB I found out that there is still the Forms Authentication module in effect, turning 401s into 302s. My EndRequest handler never sees a 401 (despite turning forms auth off in config)! Not sure what's going on (I suspect some inherited configuration that gets in my way here). Even if it shouldn't be necessary, an explicit removal of the forms auth module from the module list fixed it, and I now have the same behavior in local IIS and Windows Azure. Strange.

        <modules>
          <remove name="FormsAuthentication" />
        </modules>

    HTH

    Update: Brock ran into the same issue, and found the real reason. Read here.
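
    For context, the EndRequest hook in question is essentially this (a simplified sketch in Global.asax; the route and the path check are illustrative, not the actual IdentityServer code):

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            var ctx = HttpContext.Current;

            // redirect browsers to the sign-in route, but leave
            // web service (API) calls alone so they still see the 401
            if (ctx.Response.StatusCode == 401 &&
                !ctx.Request.Path.StartsWith("/api", StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.Redirect("~/signin?returnUrl=" +
                    HttpUtility.UrlEncode(ctx.Request.RawUrl));
            }
        }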

