Search results for 'log analysis'

  • task scheduler - run interactively as any user with admin credentials

    - by Force Flow
    I'm trying to deploy a scheduled task with a GPO. The task is set to run at login and executes a batch file, which then executes an EXE file. However, I also need it to be interactive and run with admin privileges to bypass the UAC prompt for a username and password when the EXE file runs. I created the task for "Vista and later". I've tried running the task as mydomain\administrator and as NT AUTHORITY\Authenticated Users, with "run only when user is logged in" and "run with highest privileges" selected. If I log in as anyone but administrator, the task does run, but in the background: I can see the cmd.exe process in Task Manager running as mydomain\administrator. Only if I log in as administrator do I see the cmd window with the batch script running. How can I get the cmd window to display no matter which user logs in?
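
    For reference, the command-line equivalent of a task configured this way would look roughly like the sketch below (task name, path and account are placeholders; note that /IT makes the task interactive only in the session of the /RU user, which matches the behaviour described above):

        schtasks /Create /TN "DeployApp" /TR "C:\scripts\run.bat" /SC ONLOGON /RU mydomain\administrator /RP * /RL HIGHEST /IT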

  • Nvidia GeForce GT 520-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have this desktop: i7 2600, 8 GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT 520 2 GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic. I want to set up two Samsung 22" LED monitors and get my video card started.

    1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17):

        apt-get update && apt-get upgrade
        apt-get install build-essential linux-headers-$(uname -r)
        apt-get remove --purge nvidia*
        apt-get remove --purge xserver-xorg-video-nouveau
        vim /etc/modprobe.d/blacklist.conf
            blacklist vga16fb
            blacklist nouveau
            blacklist rivafb
            blacklist nvidiafb
            blacklist rivatv
        sh NVIDIA.run
        sudo service lightdm start
        reboot
        nvidia-xconfig

    2) After reboot I get 800x600, and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."

    3) I changed xorg.conf a little to set up a resolution that works properly.

    4) I don't get any image on the monitor, and I have no options in Nvidia X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [ 12.005] (II) LoadModule: "glx"
        [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [ 12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [ 12.585] (II) NVIDIA GLX Module 302.17 Tue Jun 12 16:22:45 PDT 2012
        [ 12.585] (II) Loading extension GLX
        [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib: extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.
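
    A quick sanity check before fighting xorg.conf would be to confirm which kernel module actually claimed the card after the blacklisting (a sketch; run after a reboot):

        # nouveau must be gone and nvidia present for the GLX module to work
        lsmod | grep -E 'nouveau|nvidia'
        # regenerate /etc/X11/xorg.conf for the proprietary driver
        sudo nvidia-xconfig
        # restart the display manager
        sudo service lightdm restart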

  • Displaying Datamatrix in application error screen

    - by DaveNay
    Quite often we will get a report from a user in the field saying there was an error in our application. Frequently this leads to the typical round of "What was the error?" "I don't know, it was just an error." We of course log these faults to the log files, and we can even enable detailed debug logs, but this involves the end user changing a setting in the configuration file, then finding the correct files and emailing them to us. As I'm sure you can all imagine, there are plenty of pitfalls and alligators in this methodology. Recently a couple of people have used their cell phones to email me a "screen capture" of the fault, and while this helps, we still have to scrutinize the image to find the exact fault and, if enabled, the stack trace. So this evening I had the brilliant idea (IMHO) to encode the fault into a Datamatrix barcode image and then encourage users to send me a picture from their cell phone. I can then decode the Datamatrix and get a parseable error message! Our core technology is machine vision, so decoding the Datamatrix image would be trivial; I just need a method of generating the actual image to display in the fault handler. Thoughts?
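
    For what it's worth, the open-source libdmtx package ships a command-line encoder, so generating such an image can be sketched in one line (assuming dmtxwrite is installed; the flag usage is from memory and worth double-checking, and the fault string is purely illustrative):

        # encode an example fault string into a Data Matrix PNG
        echo "FAULT 0x17: grab timeout in camera 2" | dmtxwrite -o fault.png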

  • How to debug an old init.d script under systemd?

    - by Gene Vincent
    I have an older init.d script to start my application. It worked fine under older versions of SuSE, but fails on openSUSE 12.3. The strange thing is that "cd /etc/init.d ; ./script start" works fine, while "/etc/init.d/script start" shows a redirection to systemctl but doesn't start my application (and doesn't show any output from the init.d script either). I don't see any log entries showing me what goes wrong; the only entry I see, in /var/log/messages, says the application was started. How do I debug this?
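
    Under systemd, an init.d script is wrapped in a generated unit named after the script, so a first sketch of where to look (assuming the script is literally called "script"; on an older systemd without journalctl -u, the matching "journalctl _SYSTEMD_UNIT=script.service" form works too):

        systemctl status script.service
        journalctl -u script.service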

  • rsyslog forward all except ldap

    - by Brian
    I have CentOS 6 servers running OpenLDAP. In rsyslog.conf, I forward the logs to my central server with this line:

        *.* @10.10.10.10:514

    OpenLDAP seems incredibly chatty. I have 3 servers in a multi-master cluster, and those 3 servers generate twice as many logs as my other 80 servers combined. I have been unsuccessful in figuring out how to tell OpenLDAP to use a sensible log level (we never specifically set one), and since these are my main authentication sources, I'm a bit hesitant to "play around" with them. Is there a way to tell rsyslog to forward everything EXCEPT LOCAL4?
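
    Classic syslog selector syntax can exclude a facility with the .none keyword, so a sketch of the forwarding line (assuming slapd is logging to local4, its default facility) would be:

        # forward everything except the local4 facility
        *.*;local4.none @10.10.10.10:514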

  • External Full HD monitor and Virtual Desktop Size

    - by Stefan
    I have two Full HD monitors attached to my ATI graphics card [2]. The resolution of both of them is detected properly without any modifications to /etc/X11/xorg.conf, and I can run both of them in clone mode. However, when I try to run them next to each other, I get the following error: "The selected configuration for displays could not be applied." I tried to fix this according to [1]. My xorg.conf now looks like this:

        Section "Module"
            Load "glx"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            DefaultDepth 24
            SubSection "Display"
                # 1088 is the smallest multiple of 32 >= 1080
                # see man pages
                Virtual 1920 1088
            EndSubSection
        EndSection

    This does not seem to be parsed properly. After restarting X, I cannot set resolutions beyond 1600 or so any more. /var/log/Xorg.0.log gives:

        [ 15.676] (II) fglrx(0): Not using mode "1920x1080" (width too large for virtual size)
        [ 15.676] (II) fglrx(0): Not using mode "1680x1050" (width too large for virtual size)

    Are my modifications syntactically incorrect? According to the man page, it should be fine. Any ideas? OS: Ubuntu 11.10 64-bit.

    [1] http://askubuntu.com/a/75546/5023
    [2] 01:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
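
    Incidentally, a 1920x1088 virtual desktop only has room for one Full HD screen; two 1920x1080 monitors side by side need a virtual width of 2x1920, so the subsection would presumably have to read (a sketch, keeping the height padded to a multiple of 32):

        SubSection "Display"
            Virtual 3840 1088
        EndSubSection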

  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network, but accesses the Internet just fine. The set-up is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue. The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the login credentials), but when it tries to open any of the folders, Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs; this is the same with IE, Firefox and Chrome (there is no proxy). I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs. I set up a website on the MacBook; the Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss. Images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed). I tried all of this with the Windows Firewall turned off, and with AVG turned off; that made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.
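
    Small transfers succeeding while folder listings, images and long directory lists hang is also the classic signature of an MTU/fragmentation problem, which can be tested from the Vista laptop with a do-not-fragment ping (1472 = 1500 minus 28 bytes of IP/ICMP headers; a sketch):

        ping -f -l 1472 192.168.1.64

    If that fails while a smaller size such as -l 1400 succeeds, the interface MTU is worth investigating.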

  • Latest EMEA Partner Success Stories (21st November)

    - by swalker
    Be Recognised Through Partner Success Stories

    You can showcase your capabilities in Oracle products and industries through partner success stories that are published on Oracle.com, and benefit from the traffic on our portal. To participate, you are required to complete a form and inform us of a successfully implemented project. If your story is selected, we will contact you for an interview. Click here to access the form and submit your success story. The latest customer success snapshots with partners being uploaded onto Oracle.com are:

    Robur S.p.A.
    Telenor
    LinkPlus A.S.
    Urals Power Engineering Company
    Bochemie Group
    Mediaset S.p.A
    Landbrokes
    Coeclerici S.p.A.
    IDS GmbH - Analysis and Reporting Services
    Teatre Nacional de Catalunya
    LinkPlus A.S.
    Scottish Water
    LH Dienstbekleidungs GmbH
    Champion Europe SpA
    Metropolitan Housing Partnership
    McKesson
    Robur

    Learn more about our partner and customer successes by browsing the many EMEA Success Stories across all industries here.

  • Is a yobibit really a meaningful unit? [closed]

    - by Joe
    Wikipedia helpfully explains: "The yobibit is a multiple of the bit, a unit of digital information storage, prefixed by the standards-based multiplier yobi (symbol Yi), a binary prefix meaning 2^80. The unit symbol of the yobibit is Yibit or Yib.[1][2] 1 yobibit = 2^80 bits = 1208925819614629174706176 bits = 1024 zebibits.[3] The zebi and yobi prefixes were originally not part of the system of binary prefixes, but were added by the International Electrotechnical Commission in August 2005.[4]" Now, what in the world actually takes up 1,208,925,819,614,629,174,706,176 bits? The information content of the known universe? I guess this is forward thinking: maybe astrophysics or nanotech, or even DNA analysis, really will require these orders of magnitude. How far off do you think all this is? Are these really meaningful units?

  • A way to auto cycle (close) through all screen sessions

    - by JBWhitmore
    I frequently use screen when I log into the interactive nodes of a supercomputer I have access to, and I often start things running and move on. There are about 20 separate nodes I can log into, and if I check any one of them I'll have something like 4 detached sessions, each of which may have another 5 screen sessions within it. Is there a quick way to cycle through all of these and close them down if they are not running any processes? My current process is to run screen -ls, then screen -r ####, then type exit until I'm back to the base screen.
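
    A rough one-liner for the teardown half (a sketch: it quits every detached session on the node outright, without checking whether anything is still running inside, so verify before trusting it on a shared node):

        screen -ls | awk '/Detached/ {print $1}' | xargs -r -I{} screen -S {} -X quit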

  • Can I optimize this mod_wsgi / apache file better?

    - by tomwolber
    I am new to Django/Python/mod_wsgi, and I was wondering if I could optimize this file to reduce memory usage:

        ServerRoot "/home/<foo>/webapps/django_wsgi/apache2"

        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule rewrite_module modules/mod_rewrite.so
        LoadModule setenvif_module modules/mod_setenvif.so
        LoadModule wsgi_module modules/mod_wsgi.so

        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog /home/<foo>/logs/user/access_django_wsgi.log combined
        ErrorLog /home/<foo>/logs/user/error_django_wsgi.log

        KeepAlive Off
        Listen 12345
        MaxSpareThreads 3
        MinSpareThreads 1
        MaxClients 5
        MaxRequestsPerChild 300
        ServerLimit 4
        HostnameLookups Off
        SetEnvIf X-Forwarded-SSL on HTTPS=1
        ThreadsPerChild 5

        WSGIDaemonProcess django_wsgi processes=5 python-path=/home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6 threads=1
        WSGIPythonPath /home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6
        WSGIScriptAlias /auctions /home/<foo>/webapps/django_wsgi/auctions.wsgi
        WSGIScriptAlias /achievers /home/<foo>/webapps/django_wsgi/achievers.wsgi
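
    With this layout, memory is dominated by the five daemon processes, so one sketch worth trying is fewer processes with a few threads each, plus periodic recycling via maximum-requests (numbers are illustrative, not tuned):

        WSGIDaemonProcess django_wsgi processes=2 threads=5 maximum-requests=500 python-path=/home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6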

  • Preventing "Large Number of Failed Login Attempts from IP" emails

    - by Silver89
    I'm running a CentOS 6.3 server and currently receive emails entitled "Large Number of Failed Login Attempts from IP" from my server every 15 minutes or so. Surely with the below configured, only someone at my static IP should even be able to attempt a login? If that's the case, where are these remote unknown users logging in, generating these emails?

    Current security steps:
    root login is only allowed without-password
    StrictModes yes
    SSH password login is disabled (PasswordAuthentication no)
    SSH public keys are used
    SSH port has been changed to a number greater than 40k
    cPHulk is configured and running
    Logins limited to a specific IP address
    cPanel and WHM limited to my static IP only

    hosts.allow:
        sshd: (my static ip)
        vsftpd: (my static ip)
        whostmgrd: (my static ip)

    hosts.deny:
        ALL : ALL
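
    Note that TCP-wrapper denials still happen after the connection is accepted, so the attempts can still show up in the auth logs that generate these emails. If the goal is that nobody else can even complete a handshake with sshd, a firewall rule sketch (the port is a placeholder for the real 40000+ one):

        # allow SSH only from the static IP, drop everyone else
        iptables -A INPUT -p tcp --dport 40022 -s <my-static-ip> -j ACCEPT
        iptables -A INPUT -p tcp --dport 40022 -j DROP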

  • Nginx + SSI doesn't work [migrated]

    - by boopidoopi
    I have some problems: nginx doesn't work with SSI. Nginx listens on port 80 (frontend), apache2 listens on port 81 (backend). This is my nginx configuration:

        server {
            listen 80;
            server_name test.dev www.test.dev;
            error_log /var/log/nginx/error.log debug;
            log_subrequest on;

            location / {
                ssi on;
                proxy_pass http://localhost:81;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                client_max_body_size 15m;
                client_body_buffer_size 128k;
            }
        }

    The SSI include in test.dev's index.php:

        <!--# include virtual="http:test.dev/test.html" -->

    When I open test.dev/index.php I see a clean page. In the page source:

        <!--# include virtual="http:test.dev/test.html" -->

    So how do I enable SSI in nginx? Can you help me?
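
    For reference, nginx's SSI module resolves virtual includes as local URIs on the same server rather than as absolute URLs, and the canonical directive syntax has no space after the hash, so the include would normally be written like this sketch:

        <!--#include virtual="/test.html" -->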

  • Disable Thunderbird "failed to connect" notifications

    - by The Electric Muffin
    My network manager doesn't start until I log in, so for the first ten seconds or so after I log in I have no Internet connection. The problem is that I have Thunderbird set to start automatically on login, so it helpfully tells me it "failed to connect" every time I log in. Also, my Internet connection isn't too reliable, so I sometimes get those messages even when I'm supposedly connected. Is there any way to disable these notifications, while still allowing the ones about new mail? Computer info:

        Thunderbird 15.0

        $ uname -a
        Linux [HOSTNAME REDACTED] 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    Probably irrelevant:

        NetworkManager Version 0.9.0.1 (nm09 20120407)
        KDE Platform Version 4.8.4 (4.8.4)

  • "Unknown user name or bad password" when I launch ADUC

    - by Chris
    When I open up Active Directory Users and Computers from my workstation, I receive an error: "Naming information cannot be located because: Logon failure: unknown user name or bad password. Contact your system administrator to verify that your domain is properly configured and is currently online." If I log in to my workstation as somebody else, it works. If I log into a different workstation using my account, it works. All the workstations in question are running Windows Vista (32 and 64 bit) or Windows Server 2008. The domain controller in question is running Windows Small Business Server 2008. Everything else (that I tried) in the Remote Server Administration Tools runs just fine. Any thoughts? Edit: I just tried reinstalling RSAT. No such luck.

  • Random Server shutdown? - CentOS

    - by Kevin Hammett
    My system was working fine, and then it just had a random restart. Anyone have any idea of the problem? The message log:

        Jul 6 22:56:34 909I7 shutdown[719711]: shutting down for system halt
        Jul 6 22:56:34 909I7 init: Switching to runlevel: 0
        Jul 6 22:56:35 909I7 smartd[10743]: smartd received signal 15: Terminated
        Jul 6 22:56:35 909I7 smartd[10743]: smartd is exiting (exit status 0)
        Jul 6 22:56:42 909I7 hcid[8749]: Got disconnected from the system message bus
        Jul 6 22:56:42 909I7 auditd[8430]: The audit daemon is exiting.
        Jul 6 22:56:42 909I7 kernel: audit(1341640602.922:344412): audit_pid=0 old=8430 by auid$
        Jul 6 22:56:43 909I7 pcscd: pcscdaemon.c:572:signal_trap() Preparing for suicide
        Jul 6 22:56:43 909I7 pcscd: hotplug_libusb.c:376:HPRescanUsbBus() Hotplug stopped
        Jul 6 22:56:44 909I7 pcscd: readerfactory.c:1379:RFCleanupReaders() entering cleaning f$
        Jul 6 22:56:44 909I7 pcscd: pcscdaemon.c:532:at_exit() cleaning /var/run
        Jul 6 22:56:44 909I7 kernel: Kernel logging (proc) stopped.
        Jul 6 22:56:44 909I7 kernel: Kernel log daemon terminating.
        Jul 6 22:56:45 909I7 exiting on signal 15
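
    Since the log shows an orderly halt via shutdown[719711] rather than a crash, it may help to check who or what initiated it (a sketch; CentOS keeps auth activity in /var/log/secure):

        # recent shutdown/reboot/runlevel records
        last -x | head -20
        # login/sudo activity around the time of the halt
        grep '22:5' /var/log/secure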

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
           at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
           at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           --- End of inner exception stack trace ---
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
           at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, the suggestion seems to be to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client: "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem." Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my questions, finally: 1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store? 2. Alternatively, if there is no database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received. Many thanks, Mike

  • Pipe an infinite stream to internal loop?

    - by Sh3ljohn
    I've seen a lot of things about redirecting stdout to a TCP socket, but no real example of how to do it in practice, specifically when the output stream generated by the first "command" never ends. To talk about something concrete, let's take programs like servers that typically write their log endlessly to stdout (well, as long as they run). If you redirect the output to a log file on disk, then this file is always open (therefore not readable by others?) and grows infinitely, which eventually is going to cause problems. This might be a noob question, but I don't know what it does or how to do it, so:

    1. How do I redirect the output of a command to the internal loop?
    2. I want to make sure that data is sent EVERY time something is written to stdout, and that the pipe won't wait for the command to end (which ideally never happens). Is that right?
    3. If 2 is true, is there a buffer system to send chunks of data only once they reach a certain size?

    Could you give me concrete command line examples to do the above? Thanks in advance.
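
    As a minimal sketch of the pattern (host and port are placeholders, and a listener must already be running on the far end): the pipe delivers data as the producer flushes it and does not wait for the producer to exit, while stdbuf can force line buffering so output doesn't sit in a 4-8 KB stdio buffer:

        # line-buffer the producer's stdout and stream it over TCP
        stdbuf -oL ./server 2>&1 | nc 10.0.0.5 5140

        # pure-bash alternative using the /dev/tcp pseudo-device
        ./server > /dev/tcp/10.0.0.5/5140 2>&1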

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
    For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.

  • Broadband Traffic Question

    - by rutherford
    I have a broadband ADSL line with plus.net in the UK. Having checked the modem, there is no firewall or any weird feature enabled. But since I arrived at the apartment (the broadband already being installed), I cannot log into Twitter, nor update any of my WordPress blogs (I can browse them and log in, but cannot save any edits or new posts). It only seems to affect these two sites, each in its own way. If I take the netbook I use in this place out to, say, a McDonald's or some other wifi access point, these sites work fine again. Does anyone know what could possibly be preventing access to the pages in question? The only thing common to these pages is the POST response they are expecting. But POST form submission works fine on other sites...

  • How can I find my USB-to-RS232 driver?

    - by mefmef
    I have a device that is correctly connected to my PC, but I cannot see it in /dev. What does that mean? Is it because the driver is not installed?

    Output of ls /dev before connecting my device:

        agpgart mei sda1 tty28 tty59 ttyS30 autofs mem sda2 tty29 tty6 ttyS31 block net sda5 tty3 tty60 ttyS4 bsg network_latency sda6 tty30 tty61 ttyS5 btrfs-control network_throughput serial tty31 tty62 ttyS6 bus null sg0 tty32 tty63 ttyS7 char oldmem shm tty33 tty7 ttyS8 console parport0 snapshot tty34 tty8 ttyS9 core port snd tty35 tty9 ttyUSB0 cpu ppp stderr tty36 ttyprintk uinput cpu_dma_latency psaux stdin tty37 ttyS0 urandom disk ptmx stdout tty38 ttyS1 usbmon0 dri pts tty tty39 ttyS10 usbmon1 ecryptfs ram0 tty0 tty4 ttyS11 usbmon2 fb0 ram1 tty1 tty40 ttyS12 vcs fd ram10 tty10 tty41 ttyS13 vcs1 full ram11 tty11 tty42 ttyS14 vcs2 fuse ram12 tty12 tty43 ttyS15 vcs3 hidraw0 ram13 tty13 tty44 ttyS16 vcs4 hpet ram14 tty14 tty45 ttyS17 vcs5 input ram15 tty15 tty46 ttyS18 vcs6 kmsg ram2 tty16 tty47 ttyS19 vcsa log ram3 tty17 tty48 ttyS2 vcsa1 loop0 ram4 tty18 tty49 ttyS20 vcsa2 loop1 ram5 tty19 tty5 ttyS21 vcsa3 loop2 ram6 tty2 tty50 ttyS22 vcsa4 loop3 ram7 tty20 tty51 ttyS23 vcsa5 loop4 ram8 tty21 tty52 ttyS24 vcsa6 loop5 ram9 tty22 tty53 ttyS25 vga_arbiter loop6 random tty23 tty54 ttyS26 zero loop7 rfkill tty24 tty55 ttyS27 lp0 rtc tty25 tty56 ttyS28 mapper rtc0 tty26 tty57 ttyS29 mcelog sda tty27 tty58 ttyS3

    Output of ls /dev after connecting my device:

        agpgart mei sda1 tty28 tty59 ttyS30 autofs mem sda2 tty29 tty6 ttyS31 block net sda5 tty3 tty60 ttyS4 bsg network_latency sda6 tty30 tty61 ttyS5 btrfs-control network_throughput serial tty31 tty62 ttyS6 bus null sg0 tty32 tty63 ttyS7 char oldmem shm tty33 tty7 ttyS8 console parport0 snapshot tty34 tty8 ttyS9 core port snd tty35 tty9 ttyUSB0 cpu ppp stderr tty36 ttyprintk ttyUSB1 cpu_dma_latency psaux stdin tty37 ttyS0 uinput disk ptmx stdout tty38 ttyS1 urandom dri pts tty tty39 ttyS10 usbmon0 ecryptfs ram0 tty0 tty4 ttyS11 usbmon1 fb0 ram1 tty1 tty40 ttyS12 usbmon2 fd ram10 tty10 tty41 ttyS13 vcs full ram11 tty11 tty42 ttyS14 vcs1 fuse ram12 tty12 tty43 ttyS15 vcs2 hidraw0 ram13 tty13 tty44 ttyS16 vcs3 hpet ram14 tty14 tty45 ttyS17 vcs4 input ram15 tty15 tty46 ttyS18 vcs5 kmsg ram2 tty16 tty47 ttyS19 vcs6 log ram3 tty17 tty48 ttyS2 vcsa loop0 ram4 tty18 tty49 ttyS20 vcsa1 loop1 ram5 tty19 tty5 ttyS21 vcsa2 loop2 ram6 tty2 tty50 ttyS22 vcsa3 loop3 ram7 tty20 tty51 ttyS23 vcsa4 loop4 ram8 tty21 tty52 ttyS24 vcsa5 loop5 ram9 tty22 tty53 ttyS25 vcsa6 loop6 random tty23 tty54 ttyS26 vga_arbiter loop7 rfkill tty24 tty55 ttyS27 zero lp0 rtc tty25 tty56 ttyS28 mapper rtc0 tty26 tty57 ttyS29 mcelog sda tty27 tty58 ttyS3
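
    Note that ttyUSB1 appears only in the second listing, which suggests the adapter did register as a second USB serial port. A sketch of how to confirm which kernel driver claimed it:

        # kernel messages from the hotplug event
        dmesg | tail -20
        # driver and attributes for the new port
        udevadm info -a -n /dev/ttyUSB1 | head -40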

  • RDP exits immediately after connecting to Windows Server 2008 R2

    - by carpat
    Background: I recently got a Windows cloud VPS server. I don't have much experience with server admin (I'm a programmer), and what little I do have is with Linux servers. Ever since getting the server I've been having issues with RDP. I can connect about two or three times, after which point I can't connect until one of the tech guys "fixes" it (see below). When I do connect, I can stay connected for hours with no problem. When the problem starts, the first time I try to log in, the remote desktop window pops up, starts connecting, and then exits with "Your Remote Desktop session has ended". After that, for about 10-20 minutes, if I try to connect again the connection times out with:

        Remote Desktop can't connect to the computer for one of these reasons:
        1) Remote access on the server is not enabled
        2) The remote computer is turned off
        3) The remote computer is not available on the network

    then goes back to connecting once and immediately disconnecting. All of the updates are installed. The firewall has been correctly configured to let RDP traffic through. The remote setting is "Allow connections from computers running any version of Remote Desktop". I tried creating a second user, and when I can't connect, I can't connect to that user either. I've tried both soft and hard reboots, neither of which helps. I've tried connecting from two different computers (both running Windows 7) on two different networks (work and home), and the behavior is the same. Everything else on the server continues to run fine (IIS-served HTTP pages, Tomcat-served Java pages, svn, ping). The "fix" the tech guys supply is simply logging into the console on their end, after which point I can connect 2 or 3 times again. The event viewer on the server has "authentication failure" (or something similar) events generated when I attempt to log in and can't. I can't get to the actual event at the moment, as I'm currently in the can't-connect stage and waiting for the techs to log in. But when I searched for the event earlier this morning I couldn't find anything useful. Can anyone help?

  • Can't open an SSH session because of OpenSSL version mismatch [on hold]

    - by user3287849
    I just ran apt-get upgrade, and according to /var/log/apt/history.log, openssl has been updated to version 1.0.1e-2+rvt+deb7u7. Now I have one SSH session still open, but I can't open another one. I restarted SSH, which returned "OpenSSL version mismatch. Built against 1000105f, you have 10001080". I tried apt-get remove openssl && apt-get install openssl with no luck. I'm running Debian on a Raspberry Pi. Edit: moved to SuperUser
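
    The mismatch message means the sshd binary was built against a different libcrypto than the one it now loads at runtime, so it can help to compare what the binary links to against what is installed (a sketch for Debian wheezy; package names may differ on your image):

        # which OpenSSL libraries sshd is linked against
        ldd /usr/sbin/sshd | grep -E 'libssl|libcrypto'
        # installed library and ssh package versions
        openssl version
        dpkg -l openssh-server libssl1.0.0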

  • Nginx, logrotate and empty files

    - by tzulberti
    I have a problem with nginx/logrotate. The problem is that nginx is logging access to 2 files (main and data). I have the following crontab setting:

        0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx

    And the file "logrotate-nginx" has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    The rotation is done on both files, but there is a problem: nginx stops logging into them. Both files are created, but they stay empty. Any ideas why nginx stops logging info to both files?
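
    One tidier sketch, assuming the pid path is right: a single stanza covering both files, so sharedscripts really does send nginx exactly one USR1 per rotation run (size 1 is omitted, since the -f in the crontab forces rotation on every invocation anyway):

        /tmp/data.log /tmp/main.log {
            daily
            rotate 90
            missingok
            notifempty
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
            endscript
        }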
