Search Results

Search found 46416 results on 1857 pages for 'access log'.


  • Mac OS X Snow Leopard: permissions changed on /var results in DNS lookup issues

    - by Ivan
    I was attempting to solve an issue (a "/var/log/msmtp.log: permission denied" error when attempting to send mail using msmtp) when I did this: chmod -R 770 /var. After that, my machine would not resolve domain names via cURL (ping also fails). But, oddly, I can enter domain names into Safari and visit any web pages without a problem... I'm actually not sure if the chmod command is the cause of the problem, but I suspect it is. Also, if I ls -l on /var (or /private/var) it doesn't seem that any of the subdirectories or files there actually changed permission, but there are many, so I can't say that conclusively... Incidentally, I fixed the original error (msmtp.log permission denied) by setting TMPDIR=/tmp in my local environment (bash). Now that error goes away, but I get this one instead: msmtp: cannot locate host domainname.org: nodename nor servname provided, or not known. Any ideas about how to go about getting DNS working again?
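
    One hedged recovery sketch (assuming a standard Snow Leopard layout; none of these paths come from the question itself): let the OS restore the recorded permissions on system directories, then restart mDNSResponder, which on 10.6 handles DNS lookups for the system resolver that ping and cURL use.

        # Sketch only -- assumes Mac OS X 10.6 defaults; verify before running.
        # Repair recorded permissions on the boot volume (covers /var subdirs):
        sudo diskutil repairPermissions /
        # Restart the system DNS responder:
        sudo launchctl unload /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist
        sudo launchctl load /System/Library/LaunchDaemons/com.apple.mDNSResponder.plist
        # Sanity-check resolution through the system resolver:
        dscacheutil -q host -a name example.org

    If the restart alone doesn't help, a reboot after the permission repair is a reasonable next step.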

    Read the article

  • How can these files be accessed?

    - by harsh.singla
    The files can be accessed from every artifact of the composite, such as .bpel, .mplan, .task, .xsl, .wsdl, etc. The 'oramds' protocol is used to access these files. You need to set up your adf-config.xml file in your dev environment or JDeveloper to access these files from MDS. Here is a sample adf-config.xml:

        <adf-config xmlns="http://xmlns.oracle.com/adf/config"
                    xmlns:sec="http://xmlns.oracle.com/adf/security/config">
          <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
            <mds-config xmlns="http://xmlns.oracle.com/mds/config">
              <persistence-config>
                <metadata-namespaces>
                  <namespace path="/apps" metadata-store-usage="mstore-usage_1"/>
                </metadata-namespaces>
                <metadata-store-usages>
                  <metadata-store-usage id="mstore-usage_1">
                    <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
                      <property value="jdbc:oracle:thin:@<host>:<port>:<sid>" name="jdbc-url"/>
                      <property value="/" name="metadata-path"/>
                    </metadata-store>
                  </metadata-store-usage>
                </metadata-store-usages>
              </persistence-config>
            </mds-config>
          </adf-mds-config>
          <sec:adf-security-child xmlns="http://xmlns.oracle.com/adf/security/config">
            <CredentialStoreContext credentialStoreClass="oracle.adf.share.security.providers.jps.CSFCredentialStore"
                                    credentialStoreLocation="../../src/META-INF/jps-config.xml"/>
          </sec:adf-security-child>
        </adf-config>

    This adf-config.xml is located in a directory named .adf/META-INF in the application home of your project. The application home is the directory where the .jws file of your application exists. Other than setting up this file, you need not make any other changes in your project or composite to access MDS. After setting this up, you can create a new SOA-MDS connection in your JDeveloper. This gives you a Resource Palette in which you can browse and choose the required file from MDS.

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database into the Full Recovery Model and start taking transaction log backups. I take a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?
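
    For the point-in-time case, the restore chain is the most recent full backup plus every subsequent log backup up to the target time, restored in order. A hedged sketch of that chain (the file names, database name, and 2pm STOPAT target are all placeholders, not from the question; runnable e.g. via sqlcmd -S MYSERVER -E -i restore_to_2pm.sql):

        -- restore_to_2pm.sql: sketch of a point-in-time restore chain.
        RESTORE DATABASE [MyDB] FROM DISK = N'D:\Backup\MyDB_full.bak' WITH NORECOVERY;
        RESTORE LOG [MyDB] FROM DISK = N'D:\Backup\MyDB_log_1300.trn' WITH NORECOVERY;
        -- the log backup that spans 2pm, stopped at the target moment:
        RESTORE LOG [MyDB] FROM DISK = N'D:\Backup\MyDB_log_1400.trn'
            WITH STOPAT = N'2010-06-01T14:00:00', RECOVERY;

    The "is there value" question follows from this: the tape copy of the t-logs only matters if the disk copies on the other server could be lost in the same incident as the primary.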

    Read the article

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they make a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason for this is that the test results are not just the result of the test run but also all the binaries that were part of the test run. This often also means that the debug symbols (*.pdb) will be downloaded to your local machine. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why. They can then fix the tests and rerun them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010 where you often have multiple build agents, which of course results in different paths to the trx file. Note: to use this you must have read permission to the build folder on the build agent where the build was executed. 1. Open the build result for the build. 2. Click View Log. 3. Locate the part where MSTest is invoked. When using test containers, it looks like this: (Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice!) 4. On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Builds\Project\TestResults. 5. Double-click the .trx file and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • Broadcom BCM4313 wireless slow and high-latency

    - by Florin Andrei
    Ubuntu 12.10 64 bit on a Dell Latitude E6330 laptop. Wireless is pretty slow. It gets connected quick enough, but then it acts like a dialup connection. My ssh sessions over WiFi are slow and laggy. Even browsing is slow, the pages are loading like it's 1998. This does not depend on the access point, it's the same both at home and at work. Other systems work fine on these access points. I had an older Dell laptop before, different WiFi hardware, and it was much faster over the same wireless access points. Is this a known issue with this hardware? If so, any solutions?
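
    A commonly suggested fix for the BCM4313 on 12.x releases (hedged; it helps some machines and not others) is to switch from the in-kernel brcmsmac driver to Broadcom's proprietary STA (wl) driver, since the two behave very differently on this chip:

        # Sketch: try the Broadcom STA driver in place of brcmsmac.
        sudo apt-get install bcmwl-kernel-source    # builds/installs the wl module
        # keep the open drivers from grabbing the device at boot:
        echo blacklist brcmsmac | sudo tee -a /etc/modprobe.d/blacklist-bcm.conf
        echo blacklist bcma     | sudo tee -a /etc/modprobe.d/blacklist-bcm.conf
        sudo reboot
        # to revert if it gets worse:
        #   sudo apt-get purge bcmwl-kernel-source
        #   sudo rm /etc/modprobe.d/blacklist-bcm.conf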

    Read the article

  • Unity Greeter login screen cuts off login options

    - by ammianus
    I have a pretty newly installed Ubuntu 12.04, using Unity. My external monitor is 1920x1080 max resolution. In the Unity desktop itself everything looks great. I have an NVidia graphics card. When I start my computer and get to the Unity greeter login screen the display is oddly formatted and the resolution seems off. It looks like a zoomed view on the larger 1920x1080 screen. As such it crops the login options off to the left hand side of the screen. So I can only just see the edge of the password box for the user I want to log in with. I can log in with one account by default by blindly typing the password, but I am unable to switch to other accounts. Is there anything I can do to fix the log in screen display so that I can see the normal login options? Note: I first noticed it when I changed my desktop background and the next time I logged in I saw the issue.
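
    A hedged workaround, assuming LightDM (the 12.04 default): have the greeter force a known-good mode with an xrandr script that runs before the login screen is drawn. The output name and mode below are placeholders; read the real ones from xrandr in a working desktop session first.

        # 1) from a working session, note your output name and modes:
        xrandr
        # 2) create a script for LightDM to run before showing the greeter:
        sudo tee /usr/local/bin/greeter-display-setup <<'EOF'
        #!/bin/sh
        xrandr --output HDMI1 --mode 1920x1080    # placeholder output/mode
        EOF
        sudo chmod +x /usr/local/bin/greeter-display-setup
        # 3) reference it from /etc/lightdm/lightdm.conf under [SeatDefaults]:
        #    display-setup-script=/usr/local/bin/greeter-display-setup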

    Read the article

  • Check which process was causing high CPU load

    - by linuxk
    I'm running an nginx WordPress server in KVM using 12.04 server x86. It was running very well for about 4 months, until 2 hours ago. I found that my website was down, with no ping response. Virt-manager logged a high CPU load (please see the picture below) before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log files make me think my server was attacked. Any suggestions and help would be appreciated. kern.log and syslog showed me the same output: Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0 /nginx/access.log showed me: 119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)" my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com" The server came back up here: 119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
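
    Unless some accounting or monitoring history was already being collected, the culprit process usually can't be recovered after the fact; the hedged sketch below (Ubuntu package names and paths assumed) sets up lightweight per-process history so the next incident is attributable:

        # Sketch: record system/process history for post-mortems.
        sudo apt-get install sysstat atop
        # turn on sysstat's periodic collector:
        sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
        sudo service sysstat restart
        # after the next incident:
        sar -q                                   # load-average history
        sar -u                                   # CPU-usage history
        # step through per-process snapshots (log path used by Ubuntu's atop):
        atop -r /var/log/atop/atop_$(date +%Y%m%d)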

    Read the article

  • Ransomware: Why This New Malware is So Dangerous and How to Protect Yourself

    - by Chris Hoffman
    Ransomware is a type of malware that tries to extort money from you. One of the nastiest examples, CryptoLocker, takes your files hostage and holds them for ransom, forcing you to pay hundreds of dollars to regain access. Most malware is no longer created by bored teenagers looking to cause some chaos. Much of the current malware is now produced by organized crime for profit and is becoming increasingly sophisticated.

    How Ransomware Works

    Not all ransomware is identical. The key thing that makes a piece of malware “ransomware” is that it attempts to extort a direct payment from you. Some ransomware may be disguised. It may function as “scareware,” displaying a pop-up that says something like “Your computer is infected, purchase this product to fix the infection” or “Your computer has been used to download illegal files, pay a fine to continue using your computer.” In other situations, ransomware may be more up-front. It may hook deep into your system, displaying a message saying that it will only go away when you pay money to the ransomware’s creators. This type of malware could be bypassed via malware removal tools or just by reinstalling Windows.

    Unfortunately, ransomware is becoming more and more sophisticated. One of the latest examples, CryptoLocker, starts encrypting your personal files as soon as it gains access to your system, preventing access to the files without knowing the encryption key. CryptoLocker then displays a message informing you that your files have been locked with encryption and that you have just a few days to pay up. If you pay them $300, they’ll hand you the encryption key and you can recover your files. CryptoLocker helpfully walks you through choosing a payment method and, after paying, the criminals seem to actually give you a key that you can use to restore your files. You can never be sure that the criminals will keep their end of the deal, of course. It’s not a good idea to pay up when you’re extorted by criminals. On the other hand, businesses that lose their only copy of business-critical data may be tempted to take the risk — and it’s hard to blame them.

    Protecting Your Files From Ransomware

    This type of malware is another good example of why backups are essential. You should regularly back up files to an external hard drive or a remote file storage server. If all your copies of your files are on your computer, malware that infects your computer could encrypt them all and restrict access — or even delete them entirely. When backing up files, be sure to back up your personal files to a location where they can’t be written to or erased. For example, place them on a removable hard drive or upload them to a remote backup service like CrashPlan that would allow you to revert to previous versions of files. Don’t just store your backups on an internal hard drive or network share you have write access to. The ransomware could encrypt the files on your connected backup drive or on your network share if you have full write access. Frequent backups are also important. You wouldn’t want to lose a week’s worth of work because you only back up your files every week. This is part of the reason why automated back-up solutions are so convenient. If your files do become locked by ransomware and you don’t have the appropriate backups, you can try recovering them with ShadowExplorer. This tool accesses “Shadow Copies,” which Windows uses for System Restore — they will often contain some personal files.

    How to Avoid Ransomware

    Aside from using a proper backup strategy, you can avoid ransomware in the same way you avoid other forms of malware. CryptoLocker has been verified to arrive through email attachments, via the Java plug-in, and installed on computers that are part of the Zeus botnet. Use a good antivirus product that will attempt to stop ransomware in its tracks. Antivirus programs are never perfect and you could be infected even if you run one, but it’s an important layer of defense. Avoid running suspicious files. Ransomware can arrive in .exe files attached to emails, from illicit websites containing pirated software, or anywhere else that malware comes from. Be alert and exercise caution over the files you download and run. Keep your software updated. Using an old version of your web browser, operating system, or a browser plugin can allow malware in through open security holes. If you have Java installed, you should probably uninstall it. For more tips, read our list of important security practices you should be following. Ransomware — CryptoLocker in particular — is brutally efficient and smart. It just wants to get down to business and take your money. Holding your files hostage is an effective way to prevent removal by antivirus programs after it’s taken root, but CryptoLocker is much less scary if you have good backups. This sort of malware demonstrates the importance of backups as well as proper security practices. Unfortunately, CryptoLocker is probably a sign of things to come — it’s the kind of malware we’ll likely be seeing more of in the future.
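
    The "location that can't be written to or erased" point is the crux. As a minimal self-hosted illustration (assuming rsync and a backup host the infected machine holds no delete rights on; host and paths are placeholders), snapshot-style backups hard-link unchanged files against the previous run, so yesterday's intact copies survive even after today's source files are encrypted:

        # Sketch: versioned rsync snapshots; host and paths are placeholders.
        TODAY=$(date +%Y-%m-%d)
        rsync -a --delete \
            --link-dest=/backups/home/latest \
            /home/user/ backupuser@backuphost:/backups/home/$TODAY/
        # repoint "latest" at the newest snapshot for the next run:
        ssh backupuser@backuphost "ln -sfn /backups/home/$TODAY /backups/home/latest"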

    Read the article

  • SSIS Send Mail Task and ForceExecutionValue Error

    - by Kevin Shyr
    I tried to use the "ForcedExecutionValue" on several Send Mail Tasks and log the execution into an ExecValueVariable so that at the end of the package I can log to a table whether the data check was successful (determined by whether an email was sent out). I set up a Boolean variable that is accessible at the package level, then set up my Send Mail Task as in the screenshot below, with Boolean as my ForcedExecutionValueType. When I ran the package, I got the error described below. Just to make sure this is not another issue SSIS has with the Boolean type (you also can't set a variable value of type Boolean from xp_cmdshell), I used variables of types String, Int32, and DateTime with the corresponding ForcedExecutionValueType. The only way to get around this error was to set my variable to type Object, but then when you try to get the value out later, the Object is null. I didn't spend enough time on this to see whether it's really a bug in SSIS or whether this is just how the Send Mail Task works. I just want to log the error and will circle back on this later to narrow down the issue some more. In the meantime, please share if you have run into the same problem. The current workaround is to attach a script task at the end. Also, two existing limitations need noting: the data check needs to be done serially because every check needs an inner join to a master table, and the master table has all the data in a single XML column and hence needs to be retrieved with XQuery (a fundamental design flaw that needs to be changed). The next iteration will be to change this design into a FOR loop and pull the checking query from a table somewhere with all the info needed for the email task, but that is being put to the back of the priority list. Error Message: Error: 0xC001F009 at CountCheckBetweenODSAndCleanSchema: The type of the value being assigned to variable "User::WasErrorEmailEverSent" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object. Error: 0xC0019001 at Send Mail Task on count mismatch: The wrapper was unable to set the value of the variable specified in the ExecutionValueVariable property. Screenshot of my Send Mail Task setup:

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:

        error_log /var/log/nginx-error.log notice;
        ......
        server {
            access_log off;
            location / {
                ....
            }
        }

    but in my /var/log/messages I see: Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/" Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001" How can I prevent nginx from sending messages to my syslog?
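
    The doubled timestamp in those lines (syslog's stamp followed by nginx's own "2011/11/22 ...") suggests nginx is writing to stderr and something else is forwarding stderr to syslog, rather than nginx logging to syslog natively. A hedged checklist for tracking that down:

        # Sketch: find where the syslog copies are coming from.
        nginx -V 2>&1 | tr ' ' '\n' | grep -i log    # compiled-in default log paths
        grep -R error_log /etc/nginx/                # every error_log directive in use
        # the main (top-level) context should name a file, e.g.:
        #   error_log /var/log/nginx-error.log notice;
        # then check whether a supervisor is piping nginx's stderr to logger(1):
        ps axo pid,ppid,command | grep '[n]ginx'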

    Read the article

  • Most Innovative IDM Projects: Awards at OpenWorld

    - by Tanu Sood
    On Tuesday at Oracle OpenWorld 2012, Oracle recognized the winners of the Innovation Awards 2012 at a ceremony presided over by Hasan Rizvi, Executive Vice President at Oracle. Oracle Fusion Middleware Innovation Awards recognize customers for achieving significant business value through innovative uses of Oracle Fusion Middleware offerings. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year's Award honors customers for their cutting-edge solutions driving business innovation and IT modernization using Oracle Fusion Middleware. The program has grown over the past 6 years, receiving a record number of nominations from customers around the globe. The winners were selected by a panel of judges that ranked each nomination across multiple scoring categories.

    Congratulations to both Avea and ETS for winning this year's Innovation Award for Identity Management.

    Identity Management Innovation Award 2012 Winner – Avea
    Company: Founded in 2004, Avea is the sole GSM 1800 mobile operator of Turkey and has reached a nationwide customer base of 12.8 million as of the end of 2011
    Region: Turkey (EMEA)
    Products: Oracle Identity Manager, Oracle Identity Analytics, Oracle Access Management Suite
    Business Drivers:
    · To manage the agility and scale required for GSM operations, and to enable call center efficiency by letting agents change their identity profiles (accounts and entitlements) rapidly based on call load
    · Enhance user productivity and call center efficiency with self-service password resets
    · Enforce compliance and audit reporting
    · Seamless identity management between Avea and parent company Turk Telekom
    Innovation and Results:
    · One of the first Sun2Oracle identity management migrations, designed for high-performance provisioning and trusted reconciliation, built with connectors developed on the ICF architecture that provide custom user interfaces for dynamic and rapid management of roles and entitlements, along with entitlement-level attestation using closed-loop remediation between Oracle Identity Manager and Oracle Identity Analytics
    · Dramatic reduction in identity administration and call center password reset tasks, leading to a 20% reduction in administration costs and a 95% reduction in password-related calls
    · Enhanced user productivity by up to 25% to date
    · Enforced enterprise security and reduced risk
    · Cost-effective compliance management
    · Looking to seamlessly and securely integrate with parent and sister companies' infrastructure

    Identity Management Innovation Award 2012 Winner – Educational Testing Service (ETS)
    Company: ETS is a private nonprofit organization devoted to educational measurement and research, primarily through testing.
    Region: U.S.A. (North America)
    Products: Oracle Access Manager, Oracle Identity Federation, Oracle Identity Manager
    Business Drivers: ETS develops and administers more than 50 million achievement and admissions tests each year in more than 180 countries, at more than 9,000 locations worldwide. As the business becomes more globally based, having a robust solution to security and user management issues becomes paramount.

    The organization was looking for:
    · Simplified user experience for over 3,000 company users and a dynamic student and staff population of more than 6 million
    · Infrastructure and administration cost reduction
    · Managing security risk by controlling third-party access to ETS systems
    · Enforcing compliance and managing audit reporting
    · Automated on-boarding and decommissioning of user accounts to improve security, reduce administration costs and enhance user productivity
    · Improved user experience with simplified sign-on and user self-service
    Innovation and Results:
    1. Manage risk
    · Centralized system to control user access
    · Provided a secure way of accessing service providers' applications using federated SSO
    · Provides reporting capability for auditing, governance and compliance
    2. Improve efficiency
    · Real-time provisioning to target systems
    · Centralized provisioning system for user management and access controls
    · Enabling user self-service
    3. Reduce cost
    · Re-using common shared services for provisioning, SSO, and access by application, reducing development cost and time
    · Reducing infrastructure and maintenance cost by decommissioning legacy/redundant IDM services
    · Reducing time and effort to implement security functionality in business applications ("onboard" instead of new development)
    ETS was able to fold in new and evolving requirements in addition to the initial stated goals, realizing quick ROI and successfully meeting business objectives.

    Congratulations to the winners once again. We will be sure to bring you more from these Innovation Award winners over the next few months.

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broadphase:

    Unsorted arrays: check every object against every object - O(n^2) time and O(1) extra space. It's so slow, it's useless if n isn't really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++)
                narrowPhase(i, j);
        }

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D and O(n^1.67) for 3D) and O(n) space. Assuming the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?
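
    For the sorted-array approach, a hedged C sketch of the sweep idea (the AABB layout and the use of post-sort indices in narrowPhase are illustration-only assumptions, not from the question):

        /* Sketch of the "sorted array" broadphase: sort by minimum x, then only
           test pairs whose x-intervals overlap. */
        #include <stdlib.h>

        typedef struct { float xmin, xmax, ymin, ymax; } AABB;

        static int by_xmin(const void *a, const void *b) {
            float d = ((const AABB *)a)->xmin - ((const AABB *)b)->xmin;
            return (d > 0) - (d < 0);
        }

        void narrowPhase(int i, int j);   /* assumed to exist, as in the snippet above */

        void broadPhase(AABB *box, int n) {
            qsort(box, n, sizeof *box, by_xmin);              /* O(n log n) */
            for (int i = 0; i < n; i++) {
                /* only boxes starting before box[i] ends can overlap it in x */
                for (int j = i + 1; j < n && box[j].xmin <= box[i].xmax; j++) {
                    if (box[j].ymin <= box[i].ymax && box[i].ymin <= box[j].ymax)
                        narrowPhase(i, j);                    /* indices are post-sort */
                }
            }
        }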

    Read the article

  • SQL Server Rebuild Index

    - by Uday
    Before rebuilding an index, how can we know how much space is required for the transaction log file? (I know we may need to consider the SORT_IN_TEMPDB option: if it is set to ON, we must ensure tempdb has enough space as well, and if it is set to OFF, then sorting and the temporary index structures created during the build phase of the rebuild take place in the same database.) Many users I have checked with say: log file size = 1.5 * index size. Also, how much space is required in the filegroup for the data files? For example, consider one filegroup with one MDF plus NDF files. The MSDN link below has pretty good information about prerequisites before an index rebuild: http://msdn.microsoft.com/en-us/library/ms191183.aspx. How can I tell, exactly or approximately, the required log / primary filegroup size (or that of any other filegroup)?
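
    A rough starting point (a sketch, not an exact formula; the table name is a placeholder): measure the index's current size, since both the rebuild's data-file usage and, in FULL recovery, its log generation scale with it, then apply whatever multiplier you trust, such as the 1.5x rule of thumb above. Runnable e.g. via sqlcmd:

        -- Sketch: current size of each index on one table (placeholder name).
        SELECT i.name                               AS index_name,
               SUM(ps.used_page_count) * 8 / 1024.0 AS used_mb
        FROM   sys.dm_db_partition_stats AS ps
               JOIN sys.indexes AS i
                 ON i.object_id = ps.object_id
                AND i.index_id  = ps.index_id
        WHERE  ps.object_id = OBJECT_ID('dbo.MyTable')
        GROUP  BY i.name;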

    Read the article

  • Nginx Restart Issues

    - by heavymark
    All of a sudden, when restarting nginx I get the following error: Restarting nginx: [alert]: could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied) 2011/02/16 17:20:58 [warn] 23925#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1 the configuration file /etc/nginx/nginx.conf syntax is ok 2011/02/16 17:20:58 [emerg] 23925#0: open() "/var/run/nginx.pid" failed (13: Permission denied) configuration file /etc/nginx/nginx.conf test failed On the front end, part of the site loads, but some files, such as the CSS in particular, are not loading. They exist on the server, but when loading the resources directly in Chrome they say "Oops this page can't be found." I set up a special group and user to run my Apache files using suexec for my domain files. I think the nginx files are owned by root, however, which I'm assuming is the problem; but which nginx file ownerships would I change?
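
    Both failures in that output (error.log and nginx.pid, each errno 13) point at the restart being run without root privileges, rather than at wrong ownership of the nginx files: the master process normally starts as root, opens the logs and pidfile, and only then drops to the worker user. A hedged check sequence, using the paths from the error output:

        # Sketch: confirm it's a privilege problem, not an ownership problem.
        whoami                          # nginx is normally (re)started as root
        ls -l /var/log/nginx/error.log /var/run/nginx.pid
        sudo nginx -t                   # config test as root should now pass
        sudo /etc/init.d/nginx restart  # restart with privileges

    The "user directive makes sense only if the master process runs with super-user privileges" warning in the same output is consistent with this reading.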

    Read the article

  • Multiple vulnerabilities in Thunderbird

    - by RitwikGhoshal
    Component: Thunderbird
    Product and Resolution: Solaris 10 (SPARC: 145200-12, X86: 145201-12)

    CVE-2012-1948 (CVSSv2 Base Score 9.3) - Denial of service (DoS) vulnerability
    CVE-2012-1950 (CVSSv2 Base Score 6.4) - Address spoofing vulnerability
    CVE-2012-1951 (CVSSv2 Base Score 10.0) - Resource Management Errors vulnerability
    CVE-2012-1952 (CVSSv2 Base Score 9.3) - Resource Management Errors vulnerability
    CVE-2012-1953 (CVSSv2 Base Score 9.3) - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability
    CVE-2012-1954 (CVSSv2 Base Score 10.0) - Resource Management Errors vulnerability
    CVE-2012-1955 (CVSSv2 Base Score 6.8) - Address spoofing vulnerability
    CVE-2012-1957 (CVSSv2 Base Score 4.3) - Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability
    CVE-2012-1958 (CVSSv2 Base Score 9.3) - Resource Management Errors vulnerability
    CVE-2012-1959 (CVSSv2 Base Score 5.0) - Permissions, Privileges, and Access Controls vulnerability
    CVE-2012-1961 (CVSSv2 Base Score 4.3) - Improper Input Validation vulnerability
    CVE-2012-1962 (CVSSv2 Base Score 10.0) - Resource Management Errors vulnerability
    CVE-2012-1963 (CVSSv2 Base Score 4.3) - Permissions, Privileges, and Access Controls vulnerability
    CVE-2012-1964 (CVSSv2 Base Score 4.0) - Clickjacking vulnerability
    CVE-2012-1965 (CVSSv2 Base Score 4.3) - Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability
    CVE-2012-1966 (CVSSv2 Base Score 4.3) - Permissions, Privileges, and Access Controls vulnerability
    CVE-2012-1967 (CVSSv2 Base Score 10.0) - Arbitrary code execution vulnerability
    CVE-2012-1970 (CVSSv2 Base Score 10.0) - Denial of service (DoS) vulnerability
    CVE-2012-1973 (CVSSv2 Base Score 10.0) - Resource Management Errors vulnerability
    CVE-2012-3966 (CVSSv2 Base Score 10.0) - Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • X Session from Mac

    - by tekknolagi
    How can I log into an X server from Mac OS X? I know that ssh -X username@host will log me in and I will have the capability to run X applications. On Cygwin/X you can log in and have a whole X session from your computer... and it will look something like this: How can I replicate this? Using this batch script:

        @echo off
        SET DISPLAY=127.0.0.1:0.0
        SET REMOTE_HOST=%1
        IF "%REMOTE_HOST%" == "" SET REMOTE_HOST=10.0.0.1
        SET CYGWIN_ROOT=\cygwin
        SET RUN=%CYGWIN_ROOT%\bin\run -p /usr/bin
        SET PATH=.;%CYGWIN_ROOT%\bin;%PATH%
        SET XAPPLRESDIR=
        SET XCMSDB=
        SET XKEYSYMDB=
        SET XNLSPATH=
        if not exist %CYGWIN_ROOT%\tmp\.X11-unix\X0 goto CLEANUP-FINISH
        attrib -s %CYGWIN_ROOT%\tmp\.X11-unix\X0
        del %CYGWIN_ROOT%\tmp\.X11-unix\X0
        :CLEANUP-FINISH
        if exist %CYGWIN_ROOT%\tmp\.X11-unix rmdir %CYGWIN_ROOT%\tmp\.X11-unix
        if "%OS%" == "Windows_NT" goto OS_NT
        echo startxdmcp.bat - Starting on Windows 95/98/Me
        goto STARTUP
        :OS_NT
        REM Windows NT/2000/XP
        echo startxdmcp.bat - Starting on Windows NT/2000/XP
        :STARTUP
        %RUN% XWin -query tekknolagi.dyndns.org -clipboard -lesspointer -scrollbars -screen 0 1050x1655@2 -screen 1 1680x985@1
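
    That Cygwin script boils down to one idea: start a local X server with -query pointed at the remote host (XDMCP). A sketch under the assumption that the Mac's X11/XQuartz server also supports XDMCP and that the remote host allows it (hostname and binary path are placeholders that may vary by release):

        # Sketch: XDMCP session from OS X, assuming X11.app/XQuartz is installed
        # and the remote host accepts XDMCP (UDP 177).
        /usr/X11/bin/X :1 -query remote.host.example.org
        # a full remote login screen should then appear on the new X display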

    Read the article

  • Problems connecting Android ICS to Ubuntu using MTP

    - by ubuntico
    I've followed this tutorial from this blog which very clearly explains how to connect an Android phone with ICS to Ubuntu so that one can access the phone's sdcard (MTP access). I went through the whole procedure with no errors, and I can even attach my mobile to Ubuntu via mtpfs -o allow_other ~/Android/GalaxyS2 and disconnect via fusermount -u ~/Android/GalaxyS2. The problem comes when I try to access the mounted directory. If I try to do it via Nautilus, the system tries to open the folder for a couple of minutes and then I either see the error, or the folder disappears from Nautilus (it comes back when I unmount the path). I also get a console error: fuse: bad mount point `~/Android/GalaxyS2': Transport endpoint is not connected. I see many people on the net reporting this error, but no one offers any solution to it. I use Ubuntu 11.10 with GNOME Shell (GNOME 3) and the mobile is a Samsung Galaxy S II. I am in the fuse group, and I did all the steps in the tutorial dozens of times, all in vain.
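
    One hedged check worth ruling out first: -o allow_other requires user_allow_other to be enabled in /etc/fuse.conf, and some FUSE filesystems report the refusal poorly. A sketch:

        # Sketch: make sure FUSE permits allow_other, then remount cleanly.
        grep user_allow_other /etc/fuse.conf || \
            echo user_allow_other | sudo tee -a /etc/fuse.conf
        fusermount -u ~/Android/GalaxyS2 2>/dev/null   # clear any stale mount
        mtpfs -o allow_other ~/Android/GalaxyS2
        ls ~/Android/GalaxyS2                          # should list phone storage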

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.

    Forbidden
    You don't have permission to access /index.html on this server.
    Apache/2.2.22 (Ubuntu) Server at testing Port 80

    Here are the steps that I followed:

    Step 1: sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 2: sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
    sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 3: sudo adduser $USER www-data

    Step 4: sudo a2enmod userdir

    Step 5: sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing

    I edited the file /etc/apache/sites-available/testing so it looks like this:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName testing
            DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Step 6: I edited hosts (/etc/hosts) so it looks like this:

        127.0.0.1   localhost
        127.0.0.1   testing
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Step 7: sudo a2ensite testing
    sudo service apache2 restart

    I searched for about 2 hours on the internet but I can't figure out what went wrong; all the pages I found describe the same steps as above. I know there are similar questions on the internet, but the answer is always to change permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you!
    PS. I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here.
    Edit: There isn't much in the error log or the access log.
    In access.log: ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing" ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0" And the last line repeats for about 200 rows. In error.log: 1. These lines repeat from time to time: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations 2. And this is the predominant error (hundreds of lines): [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
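
    The predominant error points at path traversal rather than at the dist directory itself: www-data needs execute (search) permission on every ancestor directory, and a default /home/neagoe is typically mode 700 or 750. A hedged check-and-fix sketch:

        # Sketch: show the mode of each component along the path...
        d=/home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        namei -m "$d"
        # ...and grant traverse-only (execute) permission up the chain:
        while [ "$d" != "/" ]; do sudo chmod o+x "$d"; d=$(dirname "$d"); done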

    Read the article

  • Issues with web hosting at home

    - by hari
    I want to host a small personal website at home. One basic problem I am hitting: from inside my home network, I cannot access my domain name. I have to use the local IP (something like 192.168.1.4) to access the website. This IP is the desktop that is hosting the website. Because of this mapping, I have issues setting up a simple WordPress blog on it too. How do I get past this issue?
    Edit: when I try to access www.example.com (my domain) from within my home network, I get redirected to my router login.
    PS: 1) I am using the DynDNS service to map my non-static IP to my domain name. 2) My port forwarding works fine.
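
    What's described is the classic NAT hairpin (loopback) limitation: the router won't forward LAN traffic addressed to its own public IP back inside, which is why its login page appears instead of the site. If the router has no NAT-loopback option, a hedged workaround is split DNS; at its simplest, a hosts entry on each LAN machine (the address and names below are placeholders):

        # Sketch: resolve the domain to the LAN address from inside the network.
        echo "192.168.1.4  example.com www.example.com" | sudo tee -a /etc/hosts
        # confirm what the name resolves to now:
        getent hosts www.example.com

    This also fixes the WordPress issue insofar as it stems from the site URL being unreachable from inside the LAN.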

    Read the article

  • Systemctl hangs trying to start MySQL on Fedora

    - by Cerin
    When I try to start MySQL on Fedora via systemctl start mysqld.service, it hangs indefinitely and never starts. Running mysqld_safe --skip-grant-tables & or mysqld_safe --nowatch --basedir=/usr starts the server just fine, indicating the database is still there, but using service or systemctl doesn't work at all. Nothing is shown in /var/log/mysqld.log. However, /var/log/messages shows thousands of messages like: Oct 29 15:55:52 myserver systemd[1]: mysqld.service holdoff time over, scheduling restart. Oct 29 15:55:52 myserver systemd[1]: Job pending for unit, delaying automatic restart. How do I diagnose what's wrong and get MySQL to start?
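
    The "Job pending for unit, delaying automatic restart" flood suggests systemd is stuck in a restart loop around a unit that never reports ready. Some hedged diagnostics (unit name as in the question; the journalctl -u filter exists only on newer systemd releases):

        # Sketch: see what systemd thinks is happening with the unit.
        systemctl status mysqld.service        # state, main PID, recent result
        systemctl show mysqld.service | grep -E '^(ExecStart|Restart|Type)='
        journalctl -u mysqld.service -n 50     # if your journalctl supports -u
        # clear a wedged restart-loop state, then try once more:
        sudo systemctl reset-failed mysqld.service
        sudo systemctl start mysqld.service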

    Read the article

  • How to add a timestamp to the log file name with Apache log4j

    - by swati
    I am new to using the Apache logger. I have downloaded log4j and I have the following text configuration file:

        # Set root logger level to DEBUG and its only appender to mainFormat.
        log4j.rootLogger = TRACE, mainFormat, FILE
        # mainFormat is set to be a ConsoleAppender.
        log4j.appender.mainFormat=org.apache.log4j.ConsoleAppender
        # mainFormat uses PatternLayout.
        log4j.appender.mainFormat.layout=org.apache.log4j.PatternLayout
        log4j.appender.mainFormat.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
        # FILE makes a file of the output.
        log4j.appender.FILE=org.apache.log4j.FileAppender
        log4j.appender.FILE.File=log4j_HAPR001_OutputFile.log
        log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.FILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

    I use the above config file to create the log file. Now I want to add the current timestamp to the log file name. Is there any way to do this? If yes, can someone please give me instructions on how to do it? Thanks in advance. Regards, Swati
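
    A hedged answer for log4j 1.2: the stock FileAppender cannot template a timestamp into the file name, but DailyRollingFileAppender appends one via its DatePattern, rolling the file at each time boundary. Swapping the FILE appender would look roughly like this:

        # Sketch (log4j 1.2): date-stamped log files via DailyRollingFileAppender.
        log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
        log4j.appender.FILE.File=log4j_HAPR001_OutputFile.log
        # appended to the file name at each rollover, e.g. ".2010-05-14-20":
        log4j.appender.FILE.DatePattern='.'yyyy-MM-dd-HH
        log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.FILE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n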

    Read the article

  • How to sync calendar with android without google?

    - by YSN
    Hi folks, is there a way to sync an Ubuntu calendar application like Thunderbird Lightning or Evolution with an Android device without using Google Calendar? At the moment I am syncing my Thunderbird Lightning calendars on different computers via Dropbox, which is much more reliable than Google Calendar. Another big advantage over Google Calendar is that I can access my appointments offline as well, since the calendar files are synced onto the hard drive of each computer by Dropbox. I'd like to access those calendars via my Android device as well. The Dropbox app for Android does not support automatic syncing yet, so it seems like I have to use another service. Apart from that, I guess I need to know an Android app that can access a calendar file stored in ICS format. Thanks in advance. YSN

    Read the article

  • I am trying to install Kubuntu, but I get a metalink error

    - by Brook Bentley
    It looks like the ISO metalink is broken for the Kubuntu install from Wubi. Can you please fix this? Or, help me figure out what I'm doing wrong. I receive the following error: 'An error occurred: Cannot download the metalink and therefore the ISO For more information, please see the log file: c:\users\bbentley\appdata\local\temp\wubi-12.04-rev269.log' The log file contains the following errors: '08-30 14:28 DEBUG TaskList: ### Running get_metalink... 08-30 14:28 DEBUG downloader: downloading http://releases.ubuntu.com/kubuntu/12.04/kubuntu-12.04-desktop-amd64.metalink > C:\ubuntu\install 08-30 14:28 ERROR CommonBackend: Cannot download metalink file err=[Errno 14] HTTP Error 404: Not Found 08-30 14:28 DEBUG downloader: downloading http://cdimage.ubuntu.com/kubuntu/daily-live/current/precise-desktop-amd64.metalink > C:\ubuntu\install 08-30 14:28 ERROR CommonBackend: Cannot download metalink file2 err=[Errno 14] HTTP Error 404: Not Found 08-30 14:28 DEBUG TaskList: ### Finished get_metalink 08-30 14:28 ERROR TaskList: Cannot download the metalink and therefore the ISO Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 595, in get_iso File "\lib\wubi\backends\common\backend.py", line 406, in download_iso Exception: Cannot download the metalink and therefore the ISO 08-30 14:28 DEBUG TaskList: # Cancelling tasklist 08-30 14:28 DEBUG TaskList: # Finished tasklist 08-30 14:28 ERROR root: Cannot download the metalink and therefore the ISO Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 595, in get_iso File "\lib\wubi\backends\common\backend.py", line 406, in download_iso Exception: Cannot download the metalink and therefore the ISO'

    Read the article

  • Sending UDP/514 data magically appears in syslog without rsyslog running

    - by ale
    I’m using a programming language without a library to log to rsyslog over UDP. I thought I was going to need to write a library but I discovered something weird. If I send data on UDP/514 with the port open on the server then the data appears in the server’s syslog. rsyslogd isn’t running so syslog isn’t doing this. Data doesn’t get formatted into a syslog message so rsyslogd really isn’t doing this (only raw text enters syslog). Linux must see the data coming in on this port and know that it should go into /var/log/messages? If I do the same on another port (e.g. UDP/515) then nothing appears in the log! What is doing this? Some CentOS feature? The kernel?
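
    Something on the box is necessarily bound to UDP 514 and writing what it receives, and finding that listener settles the mystery; one likely candidate on an older CentOS is the classic sysklogd daemon (syslogd rather than rsyslogd) started with remote reception enabled. A hedged sketch:

        # Sketch: identify the process listening on UDP 514.
        sudo netstat -ulnp | grep ':514 '     # -p shows PID/program name
        sudo lsof -i UDP:514                  # alternative view of the same socket
        # classic sysklogd accepts remote messages when started with -r:
        ps ax | grep -v grep | grep syslogd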

    Read the article

  • AWS EC2: How to determine whether my EC2/scalr AMI was hacked? What to do to secure it?

    - by Niro
    I received notification from Amazon that my instance tried to hack another server. There was no additional information besides a log dump: Original report: Destination IPs: Destination Ports: Destination URLs: Abuse Time: Sun May 16 10:13:00 UTC 2010 NTP: N Log Extract: External 184.xxx.yyy.zzz, 11.842.000 packets/300s (39.473 packets/s), 5 flows/300s (0 flows/s), 0,320 GByte/300s (8 MBit/s) (184.xxx.yyy.zzz is my instance IP) How can I tell whether someone has penetrated my instance? What are the steps I should take to make sure my instance is clean and safe to use? Is there some intrusion detection technique or log that I can use? Any information is highly appreciated.
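
    At roughly 39,000 packets per second outbound, it is safest to assume active compromise. Some hedged first-response steps from the usual incident-handling playbook (tool names assume a standard Linux AMI; preserve evidence before touching anything):

        # Sketch: preserve evidence, then look for the obvious.
        # 0) from AWS first: snapshot the EBS volume and lock down the security group.
        last -a; lastlog                      # recent and last-ever logins
        sudo netstat -tupan | grep ESTAB      # live connections and owning processes
        ps auxf                               # odd processes, odd parent chains
        sudo find / -path /proc -prune -o -type f -mtime -3 -print 2>/dev/null | head
        # rootkit scanners (install from your package manager):
        sudo rkhunter --check
        sudo chkrootkit

    The only truly safe recovery after a confirmed compromise is to rebuild the instance from a clean AMI and redeploy the data after auditing it.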

    Read the article
