Search Results

Search found 8253 results on 331 pages for 'secure coding'.


  • Getting a Script Error Every Time I Click a Link

    - by Flip4Life
    I have everything working perfectly on my site, but for some reason I get an error
    message in the console whenever I click a link anywhere on the site. The error has to
    do with this code:

        jQuery(function($){
            $('.navbar a, .scroll a, .smoothscroll a').bind('click', function(event){
                var $anchor = $(this);
                $('html, body').stop().animate({
                    scrollTop: $($anchor.attr('href')).offset().top
                }, 850, 'easeInOutExpo');
                event.preventDefault();
            });
        });

    And the error I am getting is this:

        SCRIPT5007: Unable to get value of the property 'top': object is null or undefined
        custom.min.js, line 6 character 197

    The exact part of the code above that it highlights is:

        $('html, body').stop().animate({
            scrollTop: $($anchor.attr('href')).offset().top
        }, 850, 'easeInOutExpo')

    All I know is that when I remove this code, my scroll-to links stop working on pages
    such as http://www.northtownsremodeling.com/things-to-know.php. You can see the error
    appear and stay in the console easily by going to a page with a filter, such as
    http://www.northtownsremodeling.com/bathroom/, and clicking one of the filter buttons.
    Ultimately, I want my scroll-to behaviour to keep working without that error coming
    up. I wrote this script a long time ago, and I'm confused about what could be causing
    the error when everything else functions perfectly. Thanks!
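
    A likely cause, reading the error alone: offset() returns undefined whenever the
    link's href does not resolve to an element on the current page (filter buttons, for
    instance, may use hrefs with no matching anchor). A minimal sketch of a guard,
    assuming the markup described above:

        jQuery(function($){
            $('.navbar a, .scroll a, .smoothscroll a').bind('click', function(event){
                var href = $(this).attr('href');
                // Hrefs like "#" or links to other pages have no matching element
                // here, so offset() would be undefined; only animate when one exists.
                var $target = (href && href.charAt(0) === '#' && href.length > 1) ? $(href) : $();
                if ($target.length) {
                    $('html, body').stop().animate({
                        scrollTop: $target.offset().top
                    }, 850, 'easeInOutExpo');
                    event.preventDefault();
                }
            });
        });

    Links without an on-page target then fall through to normal navigation instead of
    throwing.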

    Read the article

  • PHP PDO changes remote host to local hostname

    - by Wade Urry
    I'm trying to connect to a remote MySQL server using PDO. However, regardless of the
    hostname or IP address I supply in the DSN, when the script runs, the address always
    reverts to the hostname of the local server where the webserver is running. Google
    suggests this could be something to do with SELinux and Apache's ability to connect
    to remote databases, but I have SELinux disabled.

    Distro: Ubuntu 11.04 x64
    Apache version: 2.2.17
    PHP version: PHP 5.3.5-1ubuntu7.11 with Suhosin-Patch (cli)

    Edit: added code as requested, though I don't believe this is an issue with my code,
    as it works fine on the local server but doesn't allow a remote connection.

        public function db_connect($driver, $dbhost, $dbname, $user, $pass)
        {
            $dsn = $driver . ':host=' . $dbhost . ';dbname=' . $dbname;
            try {
                $this->DB = new PDO($dsn, $user, $pass);
            } catch (PDOException $err) {
                print 'Database Connection Failed: ' . $err->getMessage();
                die();
            }
        }

        $remote_db = new DB('mysql', 'remote_server.domain.tld', 'database_name', 'user_name', 'password');

    This is the error message I am receiving:

        Database Connection Failed: SQLSTATE[28000] [1045] Access denied for user
        'user_name'@'local_server.domain.tld' (using password: YES)
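
    Worth noting when reading that error: MySQL identifies accounts as user@client-host,
    so the hostname in an "Access denied" message is the host you are connecting from,
    not the server you connected to; the DSN is not being rewritten. If that is what is
    happening here, the fix is a grant on the remote server matching the webserver's
    hostname. A sketch, reusing the placeholder names from the post:

        -- Run on the remote MySQL server.
        CREATE USER 'user_name'@'local_server.domain.tld' IDENTIFIED BY 'password';
        GRANT ALL PRIVILEGES ON database_name.* TO 'user_name'@'local_server.domain.tld';
        FLUSH PRIVILEGES;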

    Read the article

  • OOP beginner: ClassB extends ClassA, ClassA already an object, method in ClassB needed, etc.

    - by Yvo
    Hey guys, I'm teaching myself to go from function-based PHP coding to OOP, and this
    is the situation: ClassA holds many basic tool methods (functions); its __construct
    makes a DB connection. ClassB holds specific methods based on a certain activity
    (extracting widgets). ClassB extends ClassA because it uses some of the basic tools
    in there, e.g. a database call.

    In a PHP file I create an object, $a_class = new ClassA (thus a new DB connection).
    Now I need a method in ClassB, so I do $b_class = new ClassB; and call a method,
    which uses a method from its parent ClassA. In this example I'm using ClassA twice:
    once as an object, and once via a parent:: call, so ClassA creates another DB
    connection (or does it?).

    So what is the best setup for this basic parent/child (extends) situation? I only
    want to make one connection, of course. I don't like forwarding the object to
    ClassB like $b_class = new ClassB($a_object); or is that the best way? Thanks for
    thinking with me, and helping :d
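
    For what it's worth, passing the connection in (rather than the whole ClassA object)
    is a common answer to exactly this situation: construct one PDO (or mysqli) instance
    and hand it to every object that needs it. A minimal sketch, assuming PDO and
    illustrative names:

        class ClassA {
            protected $db;
            public function __construct(PDO $db) {
                $this->db = $db; // shared connection, created once outside the class
            }
        }

        class ClassB extends ClassA {
            public function extractWidgets() {
                // Uses $this->db inherited from ClassA; no second connection is made.
                return $this->db->query('SELECT * FROM widgets')->fetchAll();
            }
        }

        $db = new PDO('mysql:host=localhost;dbname=example', 'user', 'pass');
        $a = new ClassA($db);
        $b = new ClassB($db); // both objects reuse the same connection

    Note that a plain new ClassB (with ClassA's constructor opening the connection)
    would indeed connect again; calling parent:: methods on an existing object does not.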

    Read the article

  • C# Basic Multi-Threading Question: Call Method on Thread A from Thread B (Thread B started from Thread A)

    - by Nick
    What is the best way to accomplish this: the main thread (Thread A) creates two
    other threads (Thread B and Thread C). Threads B and C do heavy disk I/O and
    eventually need to pass resources they created back to Thread A, which then calls a
    method in an external DLL file. That DLL requires being called from the thread that
    created it, so only Thread A can call it.

    The only other time I ever used threads was in a Windows Forms application, and the
    Invoke methods were just what I needed. This program does not use Windows Forms, so
    there are no Control.Invoke methods to use.

    I have noticed in my testing that if a variable is created in Thread A, I have no
    trouble accessing and modifying it from Thread B/C, which seems very wrong to me.
    With WinForms, I was sure it threw errors for trying to access things created on
    other threads. I know it is unsafe to change things from multiple threads, but I
    really hoped .NET would forbid it altogether to ensure safe coding. Does .NET do
    this and I am just missing the boat, or does it only do it with WinForms apps?

    Since it does seemingly allow this, do I do something like an OS would do: create a
    flag and monitor it from Thread A to see if it changes, and if it does, call the
    method? Doesn't the event handler essentially do this, so could an event be used
    somehow, called on the main thread?
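
    One standard pattern for "run this on the thread that owns the resource" without
    WinForms is a work queue that Thread A drains itself. A minimal sketch using
    BlockingCollection (available from .NET 4.0; the names are illustrative):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class Program
        {
            // Delegates queued by threads B and C, executed only by thread A.
            static readonly BlockingCollection<Action> work = new BlockingCollection<Action>();

            static void Main()
            {
                var b = new Thread(() => work.Add(() => Console.WriteLine("B's resource handled on A")));
                var c = new Thread(() => work.Add(() => Console.WriteLine("C's resource handled on A")));
                b.Start();
                c.Start();

                // Thread A invokes each delegate itself, so the external DLL
                // is only ever touched by the thread that created it.
                for (int handled = 0; handled < 2; handled++)
                    work.Take().Invoke();
            }
        }

    This is essentially the flag-and-monitor idea from the post, with the blocking
    queue doing the waiting instead of a polling loop.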

    Read the article

  • Eclipse Java, writing a program [closed]

    - by ghassar
    I have an important exercise that I found on the internet. Please, I need help using
    Eclipse and Java, thanks. I have to design, implement, test and document a Java
    program (a set of classes) for the following problem specification:

    Problem 1 – Jubilee Estate Agency Property Management System

    A local estate agent would like a prototype system to keep track of properties that
    are offered for sale. The estate agent sells domestic and commercial properties. You
    will need to define classes that represent the estate agency system. You should
    design your system and the classes you will need before starting coding. Your system
    must have a graphical user interface and be designed and developed using the
    object-oriented principles of the MVC architecture design pattern, i.e. the user
    interface class must be separate from the other classes.

    The initial basic requirements for the system are as follows (a starting point for
    the model classes is sketched after this list):

    • Include a list of domestic properties for sale that include details of: address,
      description, selling price, and number of rooms
    • Include a list of commercial properties for sale that include details of: address,
      description, selling price, and area in square metres
    • Enable the properties that are for sale to be viewed on the screen
    • Allow the customer to select one or more properties to be placed on a 'viewing
      list' so that the properties can be visited in person
    • Display on the screen the viewing list that shows the details of the properties
      chosen
    • Provide a basic search facility to find properties that are for sale in a
      particular price band and display the results
    • Enable a property to be marked as sold
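
    A possible starting point for the model layer, before any GUI work; the class and
    field names here are illustrative, derived only from the requirements above:

        // Shared fields live in an abstract base class.
        abstract class Property {
            private final String address;
            private final String description;
            private final double sellingPrice;
            private boolean sold;

            Property(String address, String description, double sellingPrice) {
                this.address = address;
                this.description = description;
                this.sellingPrice = sellingPrice;
            }

            void markSold() { sold = true; }
            boolean isSold() { return sold; }
            double getSellingPrice() { return sellingPrice; }
        }

        class DomesticProperty extends Property {
            private final int numberOfRooms;
            DomesticProperty(String address, String description, double price, int rooms) {
                super(address, description, price);
                this.numberOfRooms = rooms;
            }
        }

        class CommercialProperty extends Property {
            private final double areaSquareMetres;
            CommercialProperty(String address, String description, double price, double area) {
                super(address, description, price);
                this.areaSquareMetres = area;
            }
        }

    The viewing list and price-band search then become methods on a model class holding
    a java.util.List<Property>, with the Swing view and the controller kept in separate
    classes per the MVC requirement.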

    Read the article

  • Can a PC with a 1.2 GHz dual core run VS 2013? [on hold]

    - by moo2
    Edit: This question is about a tool used for programming, Microsoft Visual Studio
    2013. I already read the minimum requirements for VS 2013 before I posted, or I
    wouldn't have asked the question, and I searched the web for an answer for a few
    hours before posting. Why should I go out and spend $400 on a laptop if I can spend
    less than $100 and use it to learn how to program? In the context of the question it
    is clear I was not asking for advice on what laptop to buy, and it was also clear I
    was not asking for someone to walk me through the system requirements of VS 2013.
    This is a vast community, and I was wondering if someone in it has experience
    dealing with my question; in other words, has anyone ever tried anything like it?
    If I had an old computer sitting around I would have tested it. At the moment all I
    have is a friend's Chromebook I'm borrowing. End edit.

    I know the requirements for Microsoft Visual Studio 2013 state a 1.6 GHz processor.
    I'm looking at getting a laptop for school, and I found one that is old but cheap
    and would work for college. The Dell D430 I'm looking at has a 1.2 GHz Core 2 Duo
    CPU, 80 GB HD, 2 GB RAM, and Windows 7 Home Premium. It's refurbished, and I can get
    it for less than most phones cost, from a Microsoft Authorized Refurbisher. I know
    it's not a Skyrim rig, but it's lightweight and can handle being a college laptop
    and doing word processing.

    Would Visual Studio 2013 run at all? If it is slower, I'm not concerned. I just want
    to know whether it would work, compile my assignments, and run them. I'd be using
    this laptop for doing assignments and learning programming languages, not for coding
    the next social media sensation.

    Read the article

  • Unable to start Tomcat6 with HTTPS enabled

    - by ram
    I have the following server.xml settings for my tomcat6 server:

        <!-- COMMENTED <Connector port="8080" maxThreads="150" enableLookups="false"
             acceptCount="100" scheme="http" redirectPort="8443"/> -->
        <!-- COMMENTED <Connector port="80" maxThreads="150" enableLookups="false"
             acceptCount="100" scheme="http" redirectPort="443"/> -->
        <Connector port="443" maxHttpHeaderSize="8192" maxThreads="150"
                   enableLookups="false" disableUploadTimeout="true" acceptCount="100"
                   scheme="https" secure="true" SSLEnabled="true"
                   SSLCertificateFile="%SSL_CERT%" SSLCertificateKeyFile="%SSL_KEY%"
                   SSLCipherSuite="ALL:!ADH:!kEDH:!SSLv2:!EXPORT40:!EXP:!LOW"
                   compression="on"
                   compressableMimeType="text/html,text/xml,text/plain,application/javascript,application/json,text/javascript"/>

    Complete server.xml is here. But when I try to start the application I get the
    following error in the catalina.*.log file:

        INFO: Initializing Coyote HTTP/1.1 on http-80
        Apr 7, 2013 8:38:38 PM org.apache.coyote.http11.Http11AprProtocol init
        SEVERE: Error initializing endpoint
        java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
            at org.apache.tomcat.jni.SSLContext.make(Native Method)
            at org.apache.tomcat.util.net.AprEndpoint.init(AprEndpoint.java:729)
            at org.apache.coyote.http11.Http11AprProtocol.init(Http11AprProtocol.java:107)
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1049)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
        Apr 7, 2013 8:38:38 PM org.apache.catalina.core.StandardService initialize
        SEVERE: Failed to initialize connector [Connector[HTTP/1.1-443]]
        LifecycleException: Protocol handler initialization failed:
        java.lang.Exception: Invalid Server SSL Protocol (error:00000000:lib(0):func(0):reason(0))
            at org.apache.catalina.connector.Connector.initialize(Connector.java:1051)
            at org.apache.catalina.core.StandardService.initialize(StandardService.java:703)
            at org.apache.catalina.core.StandardServer.initialize(StandardServer.java:838)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:538)
            at org.apache.catalina.startup.Catalina.load(Catalina.java:562)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.load(Bootstrap.java:261)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)

    I've checked the following things already:

    • I have given read permissions for everyone on the .crt and .key files.
    • I copied server.xml to a different, working tomcat6 server and it works there;
      server.xml from that mentioned working webserver doesn't work here and fails with
      the same error.
    • It works well with just HTTP enabled.
    • Explicitly mentioning the protocol in the Connector, i.e.
      protocol="org.apache.coyote.http11.Http11AprProtocol", results in the same
      exception.

    Please help me if I am missing something. Thanks in advance.
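
    Since the failing init is in Http11AprProtocol, one way to rule the APR/OpenSSL
    layer out is to run the connector through JSSE instead, which takes a Java keystore
    rather than the SSLCertificateFile/SSLCertificateKeyFile pair. A sketch, with the
    keystore path and password as placeholders:

        <Connector port="443" protocol="org.apache.coyote.http11.Http11Protocol"
                   maxThreads="150" enableLookups="false" acceptCount="100"
                   scheme="https" secure="true" SSLEnabled="true"
                   keystoreFile="/path/to/keystore.jks" keystorePass="changeit"
                   sslProtocol="TLS"/>

    If that starts cleanly, the problem lies in the APR SSL configuration (or the
    tcnative/OpenSSL build) rather than in the certificates themselves.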

    Read the article

  • PXE boot fails with message: No DEFAULT or UI configuration directive found

    - by spockaroo
    I am trying to PXE-boot a machine (client), and in the process I am trying to set up
    a TFTP server that this machine can boot off. On the server, which runs Ubuntu
    10.10, I have set up dhcp, dns, nfs, and tftp-hpa servers. All the servers/daemons
    start fine. I tested the TFTP server by using a TFTP client and downloading a file
    that the server directory hosts.

    My /etc/xinetd.d/tftp looks like this:

        service tftp
        {
            disable     = no
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -v -s /var/lib/tftpboot
            only_from   = 10.1.0.0/24
            interface   = 10.1.0.1
        }

    My /etc/default/tftpd-hpa looks like this:

        RUN_DAEMON="yes"
        OPTIONS="-l -s /var/lib/tftpboot"
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"

    My /var/lib/tftpboot/ directory looks like this:

        initrd.img-2.6.35-25-generic-pae
        vmlinuz-2.6.35-25-generic-pae
        pxelinux.0
        pxelinux.cfg/default

    I did:

        sudo chmod 644 /var/lib/tftpboot/pxelinux.cfg/default
        chmod 755 /var/lib/tftpboot/initrd.img-2.6.35-25-generic-pae
        chmod 755 /var/lib/tftpboot/vmlinuz-2.6.35-25-generic-pae

    /var/lib/tftpboot/pxelinux.cfg/default has the following contents:

        SERIAL 0 19200 0
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    I copied /var/lib/tftpboot/pxelinux.0 from /usr/lib/syslinux/ after installing the
    package syslinux-common. Also, for completeness, /etc/dhcp3/dhcpd.conf contains the
    following lines (relevant to this interface):

        subnet 10.1.0.0 netmask 255.255.255.0 {
            range 10.1.0.100 10.1.0.240;
            option routers 10.1.0.1;
            option broadcast-address 10.1.0.255;
            option domain-name-servers 10.1.0.1;
            filename "pxelinux.0";
        }

    When I boot the client machine and watch the output over the serial port, I notice
    that the client requests an IP address from the server and gets it. Then I see TFTP
    being displayed, indicating that it is trying to connect to the TFTP server. This
    succeeds, and I see TFTP.|, which returns immediately, displaying the following
    message:

        PXELINUX 4.01 debian-20100714  Copyright (C) 1994-2010 H. Peter Anvin et al
        No DEFAULT or UI configuration directive found!
        boot:

    /var/log/syslog shows:

        Feb 20 15:24:05 ch in.tftpd[2821]: tftp: client does not accept options

    What option is it talking about in the syslog? I assume it is referring to OPTIONS
    or TFTP_OPTIONS, but what am I doing wrong?
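
    The PXELINUX message itself points at the likely fix: the config file defines a
    label but never names a default, so after loading it PXELINUX has nothing to boot.
    A sketch of the same file with a DEFAULT directive added (everything else
    unchanged):

        SERIAL 0 19200 0
        DEFAULT linux
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    The "client does not accept options" syslog line is commonly a harmless TFTP option
    negotiation notice rather than the actual error, though that reading is an
    assumption here.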

    Read the article

  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to set up a vSphere Client vCenter v5.0.0 Build 455964 Template
    Customization Specification using a Windows Sysprep unattended answer XML file for
    Win2008R2. However, I didn't know how Sysprep worked before attempting this, so it
    was a time-consuming nightmare (even after reviewing VMware vSphere ESXi 5's
    documentation)! I think I've figured out what I'm supposed to be doing, but it's
    still not working. The biggest problem at this point is that the vSphere Client
    vCenter Customization Specification IP address information is not sticking when I
    load a Sysprep XML file with just one basic setting! This can only be a bug.

    Here is the process I'm using (Windows, vSphere Client):

    1. Install the Windows OS in the VM, install VM Tools, customize Windows and
       install applications (GPOs can be used to do both after deployment), shut down
       the VM, and convert the VM to a template.
    2. Create a custom Windows Sysprep XML answer file with the desired customizations.
    3. View > Management > Customization Specifications Manager; create a "New"
       specification for "Target Virtual Machine OS" = Windows, and check "Use Custom
       Sysprep Answer File". (This ADDS the custom Sysprep file; KEEPS Network (IP) and
       Operating System Options (SID, Sysprep /generalize); and REPLACES Registration
       Information (Owner Name & Organization), Computer Name, Windows License (Key),
       Administrator Password, Time Zone, Run Once, and Workgroup or Domain.)
    4. Name it "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS = Customization
       Specification) and set the Description to "Created YYYY/MM/DD by FLast". Next.
    5. Import the Sysprep answer file from a secure location. Next. Custom settings.
       Next.
    6. Click the "..." box to the right of "Use DHCP" and choose "Use the following IP
       settings:"; for "IP Address" fill out the first two octets, set appropriate
       values for the other fields, and set the DNS server addresses. OK. Next.
    7. Check "Generate New Security ID (SID)" (always, as the template is likely a
       domain-member computer, so it can be updated occasionally). Next. Finish.
    8. View > Inventory > VMs and Templates; right-click the previously completed
       template and choose Deploy Virtual Machine from this Template.
    9. Provide the new OS name (max 15 characters), then select the inventory location,
       the Host/Cluster (wait for validation to succeed), the Resource Pool (wait for
       validation to succeed), and the Storage location.
    10. Check "Power on this virtual machine after creation", select "Customize using
        an existing customization specification", select the desired specification, and
        select "Use the Customization Wizard to temporarily adjust the specification
        before deployment". Walk through the wizard (custom settings, and "Generate New
        Security ID (SID)" again, for the same reason as above), then Finish. Finish.

    I know a community member named "brian" (http://serverfault.com/users/25904/brian)
    has worked with this scenario before, but I couldn't figure out how to contact him
    directly, so Brian, if you see this message, could you provide some information to
    help? Thanks, Brian
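
    For reference, a minimal sketch of what a one-setting unattended answer file can
    look like for Win2008R2; every value here is a placeholder, not taken from the
    post:

        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <settings pass="specialize">
            <component name="Microsoft-Windows-Shell-Setup"
                       processorArchitecture="amd64"
                       publicKeyToken="31bf3856ad364e35"
                       language="neutral" versionScope="nonSxS">
              <RegisteredOwner>Owner Name</RegisteredOwner>
              <RegisteredOrganization>Organization</RegisteredOrganization>
            </component>
          </settings>
        </unattend>

    Starting from a file this small makes it easier to tell whether the IP settings are
    being dropped by the specification itself or by something inside the answer file.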

    Read the article

  • Reality behind wireless security - the weakness of encryption

    - by Cawas
    I welcome better keywording here, both in the tags and the title, and I'll add more
    links as soon as possible.

    For some years I've been trying to conceive of a wireless environment that I'd set
    up anywhere and advise for everyone, from big enterprises to small home networks of
    one machine. I've always had the feeling that using any of the so-called "wireless
    security" methods is actually bad design. I'm talking mostly about encryption and
    pass-phrasing (which are actually two different concepts), since I won't even
    consider hiding the SSID or MAC filtering.

    I understand it's a natural way of thinking. With cable networking nobody can
    access the network unless they have access to the physical cable, so you're
    "secure" in the physical sense. In a way, encryption is to wireless what walls are
    to cables, and adding pass-phrases is adding a door with a key. But cabling without
    encryption is also insecure: someone just needs to plug in to get your data! And
    while I can see the use of encrypting data, I don't think it's a security measure
    in wireless networks. As I've said elsewhere, I believe we should encrypt only
    sensitive data, regardless of wires, and passwords should always be added for
    users, not for wifi. For securing files, truly, the best solution is backup. Sure,
    all of that doesn't happen that often, but I won't consider the many situations
    where people just don't care. I think there are enough situations where people
    actually do care about using passwords on their OS users, so let's go with that in
    mind.

    To break through walls or a door, someone needs proper equipment, such as a hammer
    or a master key of some kind. The same is true for breaking the wireless walls in
    this analogy. But I'd say true data security lies elsewhere. I keep promoting the
    Fonera concept as an instance: it opens up a free wifi port, if you so choose, and
    anyone can connect to the internet through it without having any access to your
    LAN. It also uses QoS, which will never let your bandwidth drop from that public
    usage. That's security, and it's open. And who doesn't want to be able to use the
    internet freely anywhere you can find wifi spots? I have 3G myself, but that's
    beside the point here. If I have wifi at home, I want to let people use it freely
    for internet access, so as not to be a hypocrite, and even guests can easily access
    my files, just with read access, so I don't need to keep setting up encryption and
    pass-phrases that are not wholly compatible.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever,
    but I wanted to know from more experienced (super)users out there: what do you
    think? Is there really a need for encryption to have true wireless security?

    Read the article

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers,
    and have been using MS Exchange (currently 2003) for our mail system for years. I
    recently attended a Google conference where they were positing that "cloud
    computing is the way of the future" and encouraging us to switch from running our
    own email on Exchange to using GMail and Google Apps for everything. Additionally,
    one of our departments has been pushing from inside to make this transition within
    their own department, if not throughout the entire organization.

    I can definitely see some benefits, such as:

    • Archive space: we never seem to have the space our users want, and of course the
      more we get, the more we have to back up.
    • OS agnosticism: Exchange is definitely built for Windows, and with Mac and Linux
      users on the rise, these users increasingly demand better tools and support.
      Google offers this.
    • Better archiving: the potential for e-discovery, which doesn't exist in a
      practical way with our current setup.
    • Switching would relieve us of a fair bit of server administration, give more
      options to our end users, and free up the server resources we are now using for
      Exchange.
    • Our IT department wants to be perceived as providing up-to-date solutions to
      technical problems, and this change would definitely provide such an image.
    • Google's infrastructure is obviously much more robust than ours, and they employ
      some of the world's best security and network experts.

    However, there are also some serious drawbacks:

    • We would essentially be outsourcing one of our mission-critical systems to a
      third party.
    • The switch would inevitably involve Google Apps and perhaps more as well. That
      means we would have a lot more at the mercy of a single (potentially weak)
      password. (Is there a way to make this more secure using a password plus a
      physical key of some sort?)
    • Our data would not remain under our roof, or even in our country (Canada). This
      obviously has pluses on the disaster-recovery side, but I think there are
      potential negatives on the legal side.
    • I can't imagine that somebody as large as Google would be as responsive as we
      would want with regard to non-critical issues such as tracing missing emails
      (I'm not sure how much access we would have to basic mail logs, for instance).

    Can anyone help me evaluate this decision? What issues am I overlooking? What
    experiences have you had with this transition (or the opposite, GMail to Exchange)?
    Can you add to the points I have already outlined?

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications
    [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container
    [http://tomcat.apache.org/], and proxied with an Apache http server
    [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way
    to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and
    we don't have any sysadmin staff, I have to figure out myself what is the most
    sensible way to do this. My setup has evolved through two scenarios, and I'm
    considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps:

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon
    webapps folder inside a single Tomcat container. This seemed to run fine, and I did
    not encounter any memory issues. However, it poses a maintainability drawback, as
    some Cocoon components are subject to updates, which often affect the webapp
    coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same
    pool of Cocoon components, updating one of them would require the code in all web
    applications to be updated simultaneously. In order to isolate the web
    applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment:

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a
    single Tomcat container. In theory this works fine: all webapps can be updated
    independently. However, it soon results in PermGenSpace errors. It seemed that I
    could manage the problem by increasing the memory allocation for Tomcat, but I
    realise this isn't a structural solution, and overloading a single Tomcat in this
    way is prone to future memory errors. This set me thinking about the third
    scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon
    environment:

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A
    single Tomcat distribution can be instantiated multiple times with different
    $CATALINA_BASE environments, each pointing to a Cocoon instance with its own
    webapp. I wonder whether such an approach could avoid the structural memory-related
    problems of approach [2], or will the same issues apply?

    On the other hand, this approach would complicate management of the Apache http
    frontend, as it would require the AJP connectors of the different Tomcat instances
    to listen on different ports. Hence, Apache's worker configuration has to be
    updated and reloaded whenever a new webapp (in its own Tomcat instance) is added,
    and there seems to be no way to reload worker.properties without restarting the
    entire Apache http server.

    Is there perhaps another / more dynamic way of 'modularizing' multiple
    Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts,
    suggestions, advice much appreciated. Ron
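
    For scenario [3], the mechanics look roughly like this (a sketch; paths and ports
    are illustrative):

        # One shared Tomcat installation, one base directory per webapp.
        export CATALINA_HOME=/opt/tomcat
        export CATALINA_BASE=/srv/tomcat-webapp2

        # Each base directory carries its own conf/, logs/, temp/, webapps/ and work/.
        mkdir -p $CATALINA_BASE/{conf,logs,temp,webapps,work}
        cp $CATALINA_HOME/conf/server.xml $CATALINA_BASE/conf/
        # Edit $CATALINA_BASE/conf/server.xml so the shutdown, HTTP and AJP ports
        # are unique to this instance before starting it.

        $CATALINA_HOME/bin/startup.sh

    Since each instance is a separate JVM, a PermGen problem in one webapp can no
    longer take the others down, though the per-instance memory overhead is real.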

    Read the article

  • Using WSUS Admin Console from outside domain

    - by Nick
    Environment: I have a workstation on our primary domain. We have a primary WSUS
    server that is the upstream server of 8 different testing domains. The primary WSUS
    server is not part of any domain. Routing is configured between my workstation and
    the primary WSUS server: I can RDP to it without any problem, and the router is
    configured to forward any/any between my workstation and the server. This WSUS
    server cannot be part of a domain due to external requirements (which I can't
    change) in the lab I work in. The version of WSUS is WSUS 3.0 SP2.

    What I want to do: I need to connect to the WSUS server with the WSUS admin console
    from my local workstation. The end goal is to connect via PowerShell and manage it
    that way. I also need to take what I do here and port it to the 8 test domains, so
    I can manage those WSUS servers too. The routing is all in place so I can talk to
    the servers; it's just connecting to the WSUS console that is causing problems.

    The problem: I cannot get my workstation to connect to the WSUS console. I get one
    of the following errors depending on the setup.

    First error:

        Cannot connect to 'WSUS'. You do not have the permissions required to access
        this WSUS server. To connect to the server you must be a member of the WSUS
        Administrators or WSUS Reporters security groups

    I also get warning 7012 in the event log, which says the same thing.

    Second error:

        Cannot connect to 'WSUS'. The server may be using another port or different
        Secure Sockets Layer setting.

    What I have tried: So far I have configured IIS for anonymous authentication on
    both the WSUS Administration and ApiRemoting30 sites, using an account I'll call
    WSUS_User. With this in place, I get the first error. When I do this, though, the
    local WSUS console cannot be used either. Reverting to Windows authentication only
    allows the local console to work, but the remote console then gives the second
    error. I have confirmed the port, and that there is no SSL in use (which is a
    policy pushed from above that I cannot affect). I have placed WSUS_User in the
    groups mentioned above, but it still does not connect. I made sure WSUS_User has
    full access on C:\Program Files\Update Services and C:\Program Files\Update
    Services\WebServices.

    I am not very familiar with the workings of WSUS or IIS, and have gone as far as I
    can figure out on my own. Googling these errors takes me to the same steps about
    anonymous authentication and configuring permissions on folders.

    Note: I have cross-posted this to StackOverflow as well.
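
    For the PowerShell end goal, once console connectivity is sorted, the connection is
    typically made through the WSUS administration assembly. A sketch (the server name
    is a placeholder; WSUS 3.0 defaults to port 80 when SSL is off):

        # Load the WSUS administration API and connect without SSL.
        [void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("wsus-server", $false, 80)
        $wsus.GetStatus()

    A connection failing here should produce the same class of error as the console,
    which makes it a quick way to retest after each IIS change.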

    Read the article

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing, and I am searching for opinions and
    starting points on a good network/storage layout that can serve my needs for a few
    years into the future. I think I have a decent starting point for equipment, but I
    am also willing to invest fairly heavily in a solution that can last me a while. I
    am a bit of a tech nerd and have a moderate tolerance for setting up the solution.
    I would prefer the maintenance of the system to be somewhat low once it is set up,
    but I am willing to accept some tradeoffs.

    Existing equipment:

    • Router: Netgear WNDR3700 (gigabit)
    • Router: D-Link Gamerlounge DGL-4300 (gigabit)
    • Switch: 16-port Trendnet green switch (gigabit)
    • Switch: 5-port Trendnet green (gigabit)
    • Computer: i7-950 office computer (gigabit ethernet)
    • Computer: Q6600 quad-core media center, hooked up to the TV, records shows
      (gigabit ethernet)
    • Computer: Acer 1810T ultraportable laptop (gigabit and N ethernet)
    • NAS: Intel SS4200-E (gigabit)
    • External hard drive: 2TB WD Green drive (esata)
    • All kinds of miscellaneous network-connected TV, Bluray, Verizon network
      extender, HDHomeRun TV tuners, etc.

    Requirements:

    • A robust backup solution, including offsite backup, for a growing collection of
      huge family picture files and personal files, around 1.5TB.
    • A central location for all users' files, while also keeping them secure from
      each other.
    • Storage for terabytes of movie backups and recorded TV, and access to them from
      all computers (maybe around 4TB eventually).
    • The possibility to host files to friends and family easily.

    Nice to have:

    • Backup of the terabytes of movie backups.

    Intriguing possibilities:

    • The capability to have users' Windows desktops and files look the same from all
      network computers.

    I am not sure whether the new Windows Home Server 2011 would fit into this well,
    whether I need a domain server, how best to organize my backups, or how to most
    effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on
    the NAS box, which I was thinking could prevent a situation where I reach for a
    backup and find that the disk is corrupt.

    One possibility I am thinking about now is simply using my media center PC with a
    huge RAID of hard drives on which all files are stored. Pseudo-backup of all files
    would be present because of the RAID, but important files would also be backed up
    offsite by carrying hard drives to work. But what if corruption seeps into the
    files and the corrupted data is then backed up? Does RAID protect against this? I
    really want to take next to zero risk with the irreplaceable files; I can handle
    some degree of risk with the movies and other files. I'm looking for critiques of
    this idea as well as other possibilities.

    To summarize, my goal is high functionality, media capability, and robust backup of
    irreplaceable files.

    Read the article

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal
    sites (thank god). During the installation it prompted me to enter a user other
    than root, so in a rush I used my name as both user and pass (alex/alex, for what
    it's worth). I know it's horrible practice, but during the setup of this server I
    was always logged in as root to perform configurations, etc. A few days or a week
    passed, and I forgot to change the password. Then I finally got my web site
    finished, and I opened up port forwarding on my router and DynDNS to point to the
    server in my home. I've done this many times in the past and never had issues, but
    I use a cryptic root password and, I guess, disabled regular accounts.

    Today I reformatted my Windows 7 machine, and after spending all day tweaking and
    updating SP1, I looked for cloning apps and found Clonezilla, which supports SSH
    cloning. I went through the process, only to discover I needed a user, so I logged
    into my web server, saw I already had the user 'alex', and realized I didn't know
    the password. I changed the password to something cryptic and visited the 'home'
    directory, only to find contents such as passfile, bengos, etc. My heart sank: I'd
    been hacked! Sure as hell, there are all sorts of scripts and password files. I ran
    the 'last' command, and it seems they last logged in on April 3rd.

    Questions:

    • What can I do to see if they did anything destructive? Should I reformat and
      reinstall?
    • How restrictive is Debian Squeeze in terms of user permissions out of the box?
      All my personal website stuff was created using 'root', so changing files does
      not seem to have occurred.
    • How did they determine there was a user 'alex' on the machine? Can you query any
      machine and figure this out, i.e. what the users are?
    • It looks like they tried to run an IP scan. The other nodes on the network are
      running Windows 7, and one of them seems a little wonky as of late. Is it
      possible they buggered up that system?
    • What corrective action can I take to avoid this happening again, and to figure
      out what might have changed or been hacked?

    I'm hoping Debian out of the box is fairly secure, and that at best they managed to
    read some of my source code. :p

    Regards, Alex
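
    The standard answer to "should I reformat" after a shell compromise is yes, since a
    rootkit can hide from any check run on the compromised system itself. That said, a
    few quick looks before wiping can show the scope of the damage. A sketch (the date
    and paths are examples, not taken from the post):

        # Login history and currently listening services.
        last -f /var/log/wtmp
        netstat -tlpn

        # Files changed since around the suspected break-in date.
        find /etc /home /var/www -newermt "2012-04-01" -type f -ls

        # Compare installed package files against their checksums (Debian).
        apt-get install debsums && debsums -c

    None of these can prove a clean system; they only help reconstruct what happened.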

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what
    I see for various reasons relating to my potential use case. We have multiple
    (online) remote server boxes harvesting sensor data that is regularly uploaded,
    every hour or so (rsync'ed), to a VPS server. The number of remote server boxes is
    growing regularly and is forecast to keep growing (hundreds). The servers are
    geographically dispersed. They are also automatically built, and therefore generic,
    with standard tools, not bespoke per location. The data is many hundreds of files
    per day.

    I want to avoid a situation where I need to provision more VPS storage, or
    additional servers, every time we hit the VPS capacity limit, after every N server
    deployments, whatever N might be. The remote servers can never be considered fully
    secure, since we don't know what might happen to them when we are not looking. Our
    current solution is a bit naive and simply restricts inbound rsync, only over ssh,
    to known MAC-address directories and a known public key. There are plenty of holes
    to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files.

    • Would I need to manage hundreds of access keys and have each server customized to
      include them? (Doable, but key management becomes nightmarish?)
    • Could I restrict inbound connections somehow (e.g. by MAC address), or just allow
      write-only access to any client running the script? (I could deal with a flood of
      data if someone got into a system.)
    • Having a bucket per remote machine does not seem feasible due to bucket limits?
    • I don't think I want to use a single common key, as if one machine were breached
      then, potentially, a malicious hacker could get access to the filestore key and
      start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be
    suggested! I've read lots of examples of people using S3 for backup, but can't
    really find anything about this sort of data collection, unless my Google
    terminology is wrong...

    I've written more than I should here; perhaps it can be summarised thus: in a
    perfect world I just want one of our techs to install a new remote server in a
    location and have it automagically start sending files home, with little or no
    intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
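
    One angle worth checking, sketched here as an assumption rather than a tested
    recipe: IAM lets a single shared policy give every server its own write-only prefix
    in one bucket, keyed on the IAM username, so a leaked key can add data but never
    read or delete outside its own path. The bucket name below is a placeholder:

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::sensor-data-bucket/${aws:username}/*"
          }]
        }

    Each server still needs its own IAM user and key pair, but the policy itself is
    written once, which tames most of the key-management sprawl.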

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302s, 307s and error pages are being cached. I need to force a refresh of the
    content.

    Long version: I've set up a very minimal squid instance running on a gateway which
    should not cache ANYTHING, but needs to be used solely as a domain-based web
    filter. I'm using another application which redirects unauthenticated users to the
    proxy, which then uses the deny_info option to redirect any non-whitelisted request
    to the login page. After the user has authenticated, a firewall rule is put in
    place so they no longer get sent to the proxy.

    The problem is that when a user hits a website (xkcd.com) they are unauthenticated,
    so they get redirected via the firewall:

        iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135

    to the proxy. At this point squid redirects the user to the login page using a 302
    (I've also tried 307, and I've also made sure the headers are set to no-cache
    and/or no-store for Cache-Control and Pragma). Then, when the user logs into the
    system, they get the firewall rule which no longer directs them to the squid proxy.
    But if they go to xkcd.com again, they will have the original redirection page
    cached and will once again get the login page.

    Any idea how to force these redirects NOT to be cached by the browser? Perhaps this
    is a problem with the browsers and not squid, but I'm not sure how to get around
    it. Full squid config below.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl localnet src 192.168.182.0/23  # RFC1918 possible internal network
        acl localnet src fc00::/7          # RFC 4193 local private network range
        acl localnet src fe80::/10         # RFC 4291 link-local (directly plugged) machines
        acl https port 443
        acl http port 80
        acl CONNECT method CONNECT

        #
        # Disable Cache
        #
        cache deny all
        via off
        negative_ttl 0 seconds
        refresh_all_ims on
        #error_default_language en

        # Allow manager access only from localhost
        http_access allow manager localhost
        http_access deny manager
        # Deny access to anything other than http
        http_access deny !http
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !https

        visible_hostname gate.ovatn.net

        # Disable memory pooling
        memory_pools off
        # Never use neigh cache objects for cgi-bin scripts
        hierarchy_stoplist cgi-bin ?

        #
        # URL rewrite Test Settings
        #
        #acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        #url_rewrite_program /usr/lib/squid/redirector
        #url_rewrite_access allow !whitelist
        #url_rewrite_children 5 startup=0 idle=1 concurrency=0
        #http_access allow all

        #
        # Deny Info Error Test
        #
        acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        deny_info http://login.domain.com/ whitelist
        #deny_info ERR_ACCESS_DENIED whitelist
        http_access deny !whitelist
        http_access allow whitelist

        http_port 39135 transparent

        ## Debug Values
        access_log /var/log/squid/access-pre.log
        cache_log /var/log/squid/cache-pre.log
        # Production Values
        #access_log /dev/null
        #cache_log /dev/null

        # Set PID file
        pid_filename /var/run/gatekeeper-pre.pid

    SOLUTION: I believe I might have found a solution to this. After days and days of
    trying to figure it out, only through a random stumble did I find:

        client_persistent_connections off
        server_persistent_connections off

    This did the trick. So it wasn't so much the cache as a single persistent
    connection messing things up. W000T!

    Read the article

  • Preinstalled Windows 8 and Linux UEFI dual boot on a laptop

    - by itchy355
    I am trying to set up Windows 8 and Arch Linux on a new Sony Vaio E14 with
    preinstalled Windows 8. So far I have:

    • installed W8 to my new SSD (switched for the original HDD) using the recovery
      media
    • shrunk the W8 partition, deleted the recovery partition, disabled swap
    • confirmed W8 boots just fine

    On to Arch:

    • disabled Secure Boot in the BIOS
    • confirmed W8 boots just fine
    • booted Arch off the CD and installed everything to the 4th and 5th partitions
    • set up rEFInd for the EFISTUB kernel bootloader

    After that it got worse. I was unable to boot anything other than Windows 8
    (although I was glad that it at least kept working just fine). I tried:

    • creating EFI\refind\ and putting the .efi there (as per the Arch manual)
    • overwriting EFI\boot\bootx64.efi
    • overwriting EFI\Microsoft\Boot\bootmgr.efi
    • overwriting EFI\Microsoft\Boot\bootmgfw.efi --- YAY, rEFInd showed up!

    So far, so good. I kept the whole W8 Boot\ directory in EFI\windows8 and set up a
    boot menuentry for it, and it booted just fine. But upon restart everything was
    wrong: 'Operating system not found' instead of any bootloader (rEFInd or W8). I
    booted back into Arch using the live CD to find that the EFI partition had an
    erroneous FAT table. fsck.vfat fixed it, and I found that EFI\Microsoft\Boot was
    back in its original state (all rEFInd files deleted and replaced with the W8
    bootloaders). I overwrote them again and got back to rEFInd showing up correctly
    and Arch being perfectly bootable.

    After that I tried only renaming EFI\Microsoft\Boot\bootmgfw.efi to
    bootmgfw.001.efi (then copying rEFInd's .efi to bootmgfw.efi and keeping EVERY
    OTHER file as it was), but with exactly the same result. I also tried marking the
    GPT EFI partition as read-only; same result.

    Now I'm kind of out of luck. Arch boots fine, and so does W8, but it destroys the
    EFI partition in the process. Thanks for any ideas; Googling brought me this far
    and I can't find anything better.

    PS: Windows 8 MAYBE destroys the partition upon shutdown. When I order a shutdown
    in W8, it takes unusually long (about half a minute instead of ~5 seconds). So in
    theory I could solve this by hard-resetting the laptop instead of a normal
    shutdown, but that's just not nice.
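
    A hedged aside on that PS: the long shutdown is consistent with Windows 8's hybrid
    shutdown (fast startup), which hibernates the kernel session and is known to leave
    mounted volumes in a state other operating systems see as dirty. If that is what is
    corrupting the EFI partition here, disabling hibernation from an elevated W8
    command prompt turns the behaviour off:

        powercfg /h off

    This is an assumption based on the symptoms described, not a confirmed diagnosis.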

    Read the article

  • Web services not reachable through the web browser

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not
    exposed to the internet. When I enter the following address, I get the message
    quoted below. What's the reason for not being able to see the directory? Am I
    missing something in my IIS configuration? Am I missing anything in my permissions?
    For reference, I have other folders with web services and I have the same issue
    with them. When I log in to the server I do it with my Windows user and password (I
    am using Windows authentication). It's worth mentioning that when I enter the URL,
    I get a popup asking for my userid and password, but it seems unable to validate
    them, since it keeps asking me a couple of times.

        http://appsvr02/Inetpub/wwwroot/DevWebApi/

        Internet Explorer cannot display the webpage

        What you can try:
        It appears you are connected to the Internet, but you might want to try to
        reconnect to the Internet. Retype the address. Go back to the previous page.

        Most likely causes:
        • You are not connected to the Internet.
        • The website is encountering problems.
        • There might be a typing error in the address.

        More information
        This problem can be caused by a variety of issues, including:
        • Internet connectivity has been lost.
        • The website is temporarily unavailable.
        • The Domain Name Server (DNS) is not reachable.
        • The Domain Name Server (DNS) does not have a listing for the website's
          domain.
        • If this is an HTTPS (secure) address, click Tools, click Internet Options,
          click Advanced, and check to be sure the SSL and TLS protocols are enabled
          under the security section.

    Read the article

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using squid proxy 3.1 on an Ubuntu machine. I want to allow only URLs matching
    our pattern through our proxy server. I configured the acl as below. The acl for
    dstdomain is working fine: if I access any URL besides .zmedia.com, I get "proxy
    connection refused". But the url_regex is not working. What I am trying to do here
    is allow only requests from the ".zmedia.com" domain where the request URL is in
    the "/blog" context.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 ::1
        acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$
        acl allowdomain dstdomain .zmedia.com
        acl Safe_ports port 80 8080 8500 7272

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl SSL_ports port 7272         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager
        http_access deny !allowdomain
        http_access allow urlwhitelist
        http_access allow CONNECT SSL_ports
        http_access deny CONNECT !SSL_ports
        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port 3128

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        #cache_dir ufs /var/spool/squid 100 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        append_domain .zmedia.com

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

    Please correct me if I did anything wrong.
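
    One detail that stands out in that ACL, offered as a probable rather than certain
    fix: in ^http(s)://, the parenthesised (s) is a required group, not an optional
    one, so the pattern only ever matches https:// URLs, and the unescaped dots match
    any character. A corrected line might look like:

        acl urlwhitelist url_regex -i ^https?://([a-zA-Z0-9.-]+)\.zmedia\.com/blog(/.*)?$

    With -i already making the match case-insensitive, the remaining differences are
    the optional s, the escaped dots, and allowing digits and hyphens in the hostname.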

    Read the article

  • How can I simulate blocking RTMP over port 80 on Windows?

    - by Christian Nunciato
    It seems like this should be so simple, but since this isn't my area of expertise,
    I'm having a hell of a time figuring out how to do it. Basically, I have a Flash
    app and I'm connecting to a Flash Media Server to stream some content. The URL I'm
    using to do this, for example, looks like this:

        rtmp://someserver.com/some/path/mp3:somefile

    Everything works, but that's sort of the problem. What I'm trying to do is simulate
    my users attempting to play back my media under more restrictive conditions than
    the ones I have here (i.e., none), namely being stuck behind firewalls or proxy
    servers that block access to RTMP streams. Flash, according to Adobe, is equipped
    to handle proxy servers and firewalls automatically, like so (from the docs):

        When you do not specify a port number in an RTMP address, Flash will attempt to
        connect to port 1935. If it fails, it will then try to connect to port 443; if
        that fails, it will try port 80. [And if that fails, it will attempt to connect
        via RTMPT (i.e., HTTP tunneling) on port 80.] So no coding is required to
        access ports 1935, 443, or port 80 if you do not specify a port in the RTMP
        address.

    The problem I'm having is setting up a reliable environment in which to test that
    this behavior actually happens. I'm on a Windows machine, for example, so with
    Windows Firewall I can block certain ports and protocols (1935, 443), but I don't
    want to block port 80, because the final fallback protocol (RTMPT) is supposed to
    run on port 80, and Windows Firewall only gives me enough granularity (as far as I
    know, anyway) to block "all outbound TCP traffic to remote port 80"; that is, I
    can't, apparently, block "all outbound RTMP traffic to port 80" while leaving RTMPT
    traffic to port 80 unaffected.

    My understanding thus far is that I'll probably need to set up a proxy server to do
    this. Is this correct? Or is there a simpler way (on Win 7, at least) to filter out
    RTMP to 1935, RTMP to 443, and RTMP to 80, but still allow RTMPT to 80 (where all
    four hostnames are identical)? And if I do have to set up a proxy server, what's
    the simplest way to go on Windows? I've set up WinProxy, which seems a bit janky
    but apparently works; what I can't figure out, though, is how to tell Windows to
    force all TCP traffic (including RTMP, RTMPT and HTTP) through this proxy server so
    I can turn around and reject the requests for RTMP.

    Any help would be hugely appreciated. This isn't my realm of expertise and I've
    already spent more time on it than I probably should. :)
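
    For what it's worth, the proxy route maps naturally onto the difference between the
    two protocols: RTMPT is carried as ordinary HTTP requests, which an HTTP proxy will
    pass, while raw RTMP is a binary protocol that the proxy will refuse. A sketch of
    the idea with squid (assuming the test machine's traffic is pointed at the proxy;
    this is an untested outline, not a recipe):

        # Allow plain HTTP on port 80 (which carries RTMPT) but refuse CONNECT
        # tunnels, so raw RTMP on 1935/443/80 has no way through.
        acl Safe_ports port 80
        http_access deny CONNECT
        http_access deny !Safe_ports
        http_access allow all

    The remaining (harder) half, forcing all of the machine's TCP out through the
    proxy, is exactly the part the post identifies; on a single test box, setting the
    system proxy and then blocking direct outbound 1935/443/80 in Windows Firewall is
    one hedged way to approximate it.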

    Read the article

  • Having trouble connecting Magento to an external Windows database server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work There must be something, I am missing I also researched the difference between linux sql databases and windows sql databases but have not come to conclusion, if installing Mysql on windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
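
    Two server-side checks worth making before suspecting the Linux/Windows split
    (which by itself makes no difference to MySQL's wire protocol). First, the
    account's host scope: as created above, 'magentouser' defaults to host '%', but it
    is worth confirming the pair MySQL actually sees. A sketch:

        -- On the Windows MySQL server: which account/host pairs exist?
        SELECT user, host FROM mysql.user WHERE user = 'magentouser';

        -- Re-issue the grant with an explicit host and reload privileges.
        GRANT SELECT, INSERT, CREATE, ALTER, UPDATE, DELETE, LOCK TABLES
            ON magento.* TO 'magentouser'@'%' IDENTIFIED BY 'password';
        FLUSH PRIVILEGES;

    Second, confirm the server is reachable externally (bind-address in my.ini) by
    connecting from the Magento box with the stock client: mysql -h <internal-ip> -u
    magentouser -p. If that command fails the same way, the problem is entirely on the
    MySQL side rather than in Magento.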

    Read the article

  • Authenticate by libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS
    6.3. So far I have successfully installed and configured libnss-mysql. I can test
    this by doing:

        # groups testuser
        testuser : sftp

    Testuser is a member of the sftp group; in fact, all MySQL-based user accounts will
    be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp, so
    its members cannot do anything but access their home directory. Then I configured
    pam_mysql and PAM to allow MySQL logins. This also works... when SELinux is not
    enforcing. When I do setenforce 1, users can no longer log in. Error:

        Permission denied, please try again.

    This is my pam_mysql.conf file:

        users.host=localhost
        users.db_user=nss-pam-user
        users.db_passwd=***********
        users.database=sftpusers
        users.table=users
        users.user_column=username
        users.password_column=password
        users.password_crypt=6
        verbose=1

    My /etc/pam.d/sshd:

        #%PAM-1.0
        auth       sufficient   pam_sepermit.so
        auth       include      password-auth
        auth       required     pam_mysql.so config_file=/etc/pam_mysql.conf
        account    sufficient   pam_nologin.so
        account    include      password-auth
        account    required     pam_mysql.so config_file=/etc/pam_mysql.conf
        password   include      password-auth
        # pam_selinux.so close should be the first session rule
        session    required     pam_selinux.so close
        session    required     pam_loginuid.so
        # pam_selinux.so open should only be followed by sessions to be executed in the user context
        session    required     pam_selinux.so open env_params
        session    optional     pam_keyinit.so force revoke
        session    include      password-auth

    And, to be complete, the contents of some log files.

    /var/log/secure:

        Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown
        Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser)
        Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser
        Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2

    /var/log/audit/audit.log:

        type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'
        type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed'
        type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'

    I tried to let audit2why explain the problem, but it remains silent even though
    there are some errors. Does anyone see the problem? Thanks!

    EDIT: It turns out it's almost working with setenforce 0: I can mkdir foobar, but
    if I do a single ls I get an error: Received message too long 16777216
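
    When audit2why is silent, the denial is often being suppressed by a dontaudit rule
    rather than absent. A hedged sequence for surfacing it and generating a local
    policy module from whatever appears (the module name is illustrative):

        # Temporarily disable dontaudit rules so hidden denials get logged.
        semodule -DB

        # Reproduce the failed login, then inspect and convert the denials.
        ausearch -m avc -ts recent
        grep sshd /var/log/audit/audit.log | audit2allow -M sshd_mysql
        semodule -i sshd_mysql.pp

        # Restore dontaudit suppression afterwards.
        semodule -B

    The likely underlying issue is that sshd_t is not normally allowed to open a TCP
    connection to a MySQL port, though that reading is an inference from the setup,
    not from a recorded AVC.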

    Read the article

  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT aiming to convert to basic disk. In this question, the disk stays dynamic but becomes bootable.) There doesn't seem to be a clear, well-stated answer I can find for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk?", or "How do I fix a dynamic MBR partition that's failing to boot?" I've tried to educate myself but can't find the crucial information to clear it all up. My existing HDD/SSD setup:

        DISK 0 ~ 60GB SSD/MBR/basic: (350MB recovery)(60GB Windows 8 bootable)
        DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
        DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data)
        DISKS 3, 4, 5: (ignored for simplicity: 2xHDD RAID1 + caching SSD)

    I do heavy-duty data crunching and virtualisation; I just maxed out 32GB RAM @ 2133 and moved to a 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisation runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double-speed reading, so snapshots complete in reasonable time. I'm using 4 SATA3 ports, and the board only has two decent Intel ports (the onboard Marvell ones are poorer quality). I'm wary of choosing between LSI, HighPoint and other 3rd-party controllers, as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 Pros, giving a setup with 2 fewer SSDs:

        DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB boot)(410GB mirrored data)
        DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(30GB cache for the ICH10R mirror)(30GB temp)(410GB mirrored data)
        DISKS 2, 3: (2xHDD RAID1)

    Intel's RST allows this, Win 8 allows booting off an MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway; they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped. The difficulty is, I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with a previous traditional MBR/basic boot disk, and I want it to boot from a disk and volume that's MBR/dynamic. The disk copy is physically OK (Partition Wizard Server will copy to dynamic volumes), but it then hits a light-blue 0xc000000e boot error. No real surprise; I expected to have some boot fixing to do, but had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough to know what has to be manually checked, and perhaps corrected, for the disk to boot (legacy/UEFI/BIOS, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), or whether I need to change any of the Secure Boot/UEFI/legacy settings in the BIOS, convert a 512GB SSD to basic and then back to dynamic once it's working, or whether the issue is pure OS config using "diskpart", "bootsect" and "bootrec" from the Win8 DVD. The old system disk still boots, but I don't know enough to figure out what to fix to make the system boot as I want. The answers probably aren't hard; the real issue is my confusion and missing information. Thanks for helping!
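    Editor's note: this doesn't settle the "criteria" question, but for the 0xc000000e error itself a typical manual repair pass from the Win8 DVD's recovery console looks like the sketch below. It assumes legacy BIOS/MBR booting (not UEFI) and that the copied Windows volume appears as C: inside the recovery environment; drive letters there often differ from the running system, so verify with diskpart first.

        REM Confirm which letter the copied Windows volume has (inside diskpart: list volume, then exit):
        diskpart

        REM Rewrite the MBR boot code and the partition boot sector:
        bootrec /fixmbr
        bootsect /nt60 C: /mbr

        REM Rebuild the BCD store, then recreate the boot files for BIOS/MBR booting:
        bootrec /rebuildbcd
        bcdboot C:\Windows /s C: /f BIOS

    One caveat, stated only as far as I can verify it: Windows will only boot a dynamic MBR disk from a volume that still has a real entry in the MBR partition table, i.e. one that existed as a basic partition before the conversion to dynamic. A simple volume created inside the dynamic group afterwards has no partition-table entry for the BIOS to find, and no amount of BCD repair will make it bootable.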

    Read the article

  • T60 screen/LCD goes black after a few minutes, with a high-pitched sound rising and fading

    - by Edward De Leau
    Just now my T60 screen went "black" (no display). On my second monitor: no problems, so the VGA output works.

    Symptom: screen blanks / no display, but works on the second monitor.

    Steps to reproduce:
    - boot
    - wait (it does not matter what you do; you do not have to log in or anything)
    - the laptop's monitor slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise for about 10 seconds
    - right after the sound ends, the monitor goes black

    The times seem to be the same each run. Software: no new software installed before/after; running ZoneAlarm and antivirus. Other: it does not feel hot anywhere, and there don't seem to be any running processes with strange behaviour. Warranty: out of warranty. What was I doing: typing text on a website and doing some PHP coding in a text editor. Does anyone have any idea what I can do here other than buy a new laptop? Does it sound familiar from known cases?

    Update: exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772 and the second poster (garyj) here: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627 and here: "i have that same problem. i replaced the CCRL on mine and it works fine when the screen is not screwed in. once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimly lit." from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD is no use; he tried it 3 times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914

    Hmmm... not handy: 3 or 4 months ago I ordered and installed a new fan, and now the LCD. This does not seem to be a fault in the panel itself but some electrical issue, so it seems replacing the LCD is not the thing to do here. My question: if it is not the LCD that needs to be replaced (see the other threads), which parts can I order to fix this? Has anyone got information that could lead me to identify the issue? I have read about replacing the "inverter" AND the "backlight"; would that make sense?

    Read the article
