Search Results

Search found 59880 results on 2396 pages for 'data recovery'.

Page 173 of 2396

  • Cron is running but not outputting data

    - by Youri
    I'm trying to make my Amazon EC2 instances stop and start from a crontab. The EC2 API tools are successfully installed, and the commands work when run manually. The cron entry (added with the command crontab -e):

        10 * * * * ubuntu /usr/bin/ec2-stop-instances [instanceid] >> /tmp/ec2.log

    The file /tmp/ec2.log is created, and grep CRON /var/log/syslog shows the cron has actually run, but I don't get any output in the /tmp/ec2.log file. I have set all the Amazon variables that are needed. Even if I create a deliberately wrong cron, like this:

        10 * * * * ubuntu /usr/bin/ec2-stop-instancwweqes [instanceid] >> /tmp/ec2.log

    I get no output in the file. Shouldn't there be an error? I also tried not defining the user:

        10 * * * * /usr/bin/ec2-stop-instances [instanceid] >> /tmp/ec2.log

    and the direct command:

        10 * * * * ubuntu ec2-stop-instances [instanceid] >> /tmp/ec2.log

    Can someone please help me? If I can somehow debug this, I can get to the solution. Thanks in advance.
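
    For what it's worth, a debugging sketch of my own (not from the question): a user crontab edited with crontab -e takes no user field, so in the entries above cron would try to run "ubuntu" as the command; errors also go to stderr, which a plain >> redirection does not capture; and cron runs with a minimal environment that lacks the EC2 variables set in a login shell. One entry that addresses all three:

        # Hypothetical user-crontab entry (no "ubuntu" user field).
        # Source the login environment so EC2_HOME, JAVA_HOME etc. are set,
        # and capture stderr too, so errors actually reach the log:
        10 * * * * . /home/ubuntu/.profile; /usr/bin/ec2-stop-instances [instanceid] >> /tmp/ec2.log 2>&1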

  • Is type safety worth the trade-offs?

    - by Prof Plum
    I began coding primarily in Python, where there is no type safety, then moved to C# and Java, where there is. I found that I could work a bit more quickly and with fewer headaches in Python, but then again, my C# and Java apps are at a much higher level of complexity, so I have never given Python a true stress test, I suppose. The Java and C# camps make it sound like without type safety in place, most people would be running into all sorts of horrible bugs left and right, and it would be more trouble than it's worth. This is not a language comparison, so please do not address issues like compiled vs. interpreted. Is type safety worth the hit to speed of development and flexibility? Why? To the people who wanted an example of the opinion that dynamic typing is faster: "Use a dynamically typed language during development. It gives you faster feedback, turn-around time, and development speed." - http://blog.jayway.com/2010/04/14/static-typing-is-the-root-of-all-evil/
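
    To make the trade-off concrete, here is a small illustration of my own (not from the question) using C#'s dynamic keyword to stand in for a dynamically typed language:

        using System;

        class TypeSafetyDemo
        {
            // Statically typed: misuse is rejected before the program ever runs.
            static int Square(int x) => x * x;

            // 'dynamic' opts out of compile-time checking, so mistakes surface
            // only when the offending line actually executes.
            static dynamic SquareDynamic(dynamic x) => x * x;

            static void Main()
            {
                Console.WriteLine(Square(7));           // 49
                // Console.WriteLine(Square("7"));      // would not even compile
                Console.WriteLine(SquareDynamic(7));    // 49
                Console.WriteLine(SquareDynamic("7"));  // compiles, throws RuntimeBinderException
            }
        }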

  • Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are Available.

    - by user702295
    Hello! Updates to the Demantra Partial Schema Exporter Tool, Patch 13930627, are available. This is an updated re-release of the generic Partial Schema Exporter Tool. The generic patch is for 7.3.1.x and 12.2.x. TABLE_REORG was introduced in 7.3.1.3 and 12.2.0; therefore, for 7.3.1.x the schema must be at 7.3.1.3 or above. This is build 3 of the patch. It contains fixes for the following bugs:

    - BUG 17495971 - DEMANTRA 12.2 - CUMULATIVE HISTORY NOT CORRECT
      It now uses DATA_PUMP COMPRESSION only on Enterprise Edition for 11g and up.
    - BUG 17452153 - 1OFF:16086475:TRYING TO FILTER DROP DOWN IN A METHOD CALL USING MORE THAN 1 ATTR
      It now builds GL level filters with and without the GL id column where applicable.

    These bugs are also fixed in 7.3.1.6 and 12.2.3.

  • SEO - different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Say a user is visiting from an iPad: I take them directly to the page of iPad wallpapers, the user selects the iPad version, and I take them to the gallery of wallpapers, where they can select and download any wallpaper. Every wallpaper is at the required resolution; I have my reasons for doing this. Now, the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each having its own view page. There is only one record in the database table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when the 5 different pages are generated in the 5 categorized sections of the website, then due to the shared DB record, the keywords, the titles, and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details each page has. The pages also have different paths, like:

        wallpapers.com\ipad-1\cars\Ferrari-dino.html
        wallpapers.com\ipad-2\cars\Ferrari-dino.html
        wallpapers.com\ipad-3\cars\Ferrari-dino.html
        wallpapers.com\ipad-4\cars\Ferrari-dino.html
        wallpapers.com\ipad-5\cars\Ferrari-dino.html

    That is my scenario. How do search engines see it, and how do they rank it? Is it good, normal, or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

  • large product image structured data and visibility

    - by Mark Resølved
    On an eCommerce site we have two images for a product: one medium-sized image shown at the top of the page, and one large photo shown on click in an overlay. We use http://schema.org/Product microdata on the page. We'd like the large, initially hidden photo to be the main image for the product, as it's the better-looking one, so it's also referenced in the XML sitemap as <image:image>, and we put the itemprop="image" attribute on the hidden large image. But I'm wondering: is it a bad idea to use a microdata attribute on a hidden (style="display:none;") element? Is there a better way to embed the main image in terms of SEO, without showing it initially?
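
    For reference, a minimal sketch of the markup being described (the product name, file names, and surrounding structure are placeholders of mine, not from the question):

        <!-- hypothetical sketch of the setup described above -->
        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Example Product</span>
          <!-- medium image, visible at the top of the page -->
          <img src="/img/example-medium.jpg" alt="Example Product">
          <!-- large image, revealed in an overlay on click; carries the
               itemprop="image" attribute despite being hidden initially -->
          <img itemprop="image" src="/img/example-large.jpg" alt="Example Product"
               style="display:none;">
        </div>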

  • Checking form input data on submit with pure PHP [migrated]

    - by Leron
    I have some experience with PHP, but I have never even tried to do this with pure PHP. Now a friend of mine asked me to help him with this task, so I sat down and wrote some code. What I'm asking for is opinions: is this the right way to do it when you want to use only PHP, and is there anything I can change to make the code better? Apart from that, I think the code is working, at least in the few tests I made with it. Here it is:

        <?php
        session_start();

        // define variables and initialize with empty values
        $name = $address = $email = "";
        $nameErr = $addrErr = $emailErr = "";
        $_SESSION["name"] = $_SESSION["address"] = $_SESSION["email"] = "";
        $_SESSION["first_page"] = false;

        if ($_SERVER["REQUEST_METHOD"] == "POST") {
            if (empty($_POST["name"])) {
                $nameErr = "Missing";
            } else {
                $_SESSION["name"] = $_POST["name"];
                $name = $_POST["name"];
            }

            if (empty($_POST["address"])) {
                $addrErr = "Missing";
            } else {
                $_SESSION["address"] = $_POST["address"];
                $address = $_POST["address"];
            }

            if (empty($_POST["email"])) {
                $emailErr = "Missing";
            } else {
                $_SESSION["email"] = $_POST["email"];
                $email = $_POST["email"];
            }
        }

        if ($_SESSION["name"] != "" && $_SESSION["address"] != "" && $_SESSION["email"] != "") {
            $_SESSION["first_page"] = true;
            header('Location: http://localhost/formProcessing2.php');
            exit; // stop rendering the form once the redirect has been sent
            //echo $_SESSION["name"]. " " .$_SESSION["address"]. " " .$_SESSION["email"];
        }
        ?>
        <!DOCTYPE html>
        <head>
        <style>
        .error { color: #FF0000; }
        </style>
        </head>
        <body>
        <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>">
            Name <input type="text" name="name" value="<?php echo htmlspecialchars($name);?>">
            <span class="error"><?php echo $nameErr;?></span>
            <br />
            Address <input type="text" name="address" value="<?php echo htmlspecialchars($address);?>">
            <span class="error"><?php echo $addrErr;?></span>
            <br />
            Email <input type="text" name="email" value="<?php echo htmlspecialchars($email);?>">
            <span class="error"><?php echo $emailErr;?></span>
            <br />
            <input type="submit" name="submit" value="Submit">
        </form>
        </body>
        </html>

  • PHPForm Data Generate PDF & Send to Email?

    - by tom
    I'm a beginner in PHP, so I was wondering if this is easy to do or if I'd have to outsource it to a programmer. Basically, when a user fills in the PHP form and submits it, I need this to generate a PDF (with all the labels etc.) which will then be emailed/attached to MY email, and NOT to the user who submitted the form. I have looked at TCPDF and FPDI, but I don't think either of those scripts lets me do this specifically, as from what I've heard they generate a download link for the user, and that is not what I need. If anyone can help me it would be greatly appreciated. Regards, Tom
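
    For the record, TCPDF can return the PDF as an in-memory string instead of a download. A rough sketch of my own (the field names, addresses, and use of PHPMailer are assumptions, not from the question):

        <?php
        require 'vendor/autoload.php'; // assumes TCPDF and PHPMailer installed via Composer

        use PHPMailer\PHPMailer\PHPMailer;

        // Render the submitted fields into a PDF held in memory.
        $pdf = new TCPDF();
        $pdf->AddPage();
        foreach (['name', 'email', 'message'] as $field) {            // placeholder field names
            $value = $_POST[$field] ?? '';
            $pdf->Write(0, ucfirst($field) . ': ' . $value, '', false, '', true);
        }
        $pdfData = $pdf->Output('submission.pdf', 'S');               // 'S' = return as string

        // Mail it to yourself as an attachment; the visitor sees no download link.
        $mail = new PHPMailer(true);
        $mail->setFrom('forms@example.com');
        $mail->addAddress('me@example.com');                          // your address, not the visitor's
        $mail->Subject = 'New form submission';
        $mail->Body    = 'Submitted form attached as PDF.';
        $mail->addStringAttachment($pdfData, 'submission.pdf');
        $mail->send();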

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200 GB of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data into the datacentre without rendering the connection unusable during office hours, because there's far too much to send overnight over the two-meg upstream connection we have here. I'm familiar with using tools like nice to stop a long-running process from degrading machine performance. Is there a simple way to throttle a connection during office hours, so the long-running transfer doesn't block the pipe, but then open it up outside of office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the datacentre, but I'd like to avoid doing that if I could. We're using CentOS Linux for our servers, in the office and the datacentre, so extra points for an open-source Linux answer.
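
    One hedged sketch of my own (hosts and paths are placeholders): rsync can throttle itself with --bwlimit, and because --partial keeps partially transferred files for resumption, a pair of cron jobs can simply kill the fast run each morning and restart a slow one:

        # Throttled daytime sync: --bwlimit is in KiB/s, so 100 leaves most of a
        # ~2 Mbit/s uplink free; --partial lets an interrupted transfer resume.
        rsync -az --partial --bwlimit=100 /opt/zimbra/ backup@dc.example.com:/srv/zimbra/

        # Example crontab: unthrottled overnight, throttled during office hours.
        # 0 19 * * * pkill -x rsync; rsync -az --partial /opt/zimbra/ backup@dc.example.com:/srv/zimbra/
        # 0 8  * * * pkill -x rsync; rsync -az --partial --bwlimit=100 /opt/zimbra/ backup@dc.example.com:/srv/zimbra/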

  • What are the most known arbitrary precision arithmetic implementation approaches?

    - by keykeeper
    I'm going to write a class library for .NET which provides an implementation of arbitrary-precision arithmetic for integer, rational, and maybe complex numbers. Which well-known approaches should I become familiar with? I tried to start with Knuth's TAOCP Vol. 2 (Seminumerical Algorithms, Chapter 4 – Arithmetic), but it's too complicated; at least, I couldn't get the ideas in a relatively short period of time.
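
    For orientation, the "classical algorithms" in that chapter operate on numbers stored as arrays of fixed-width digits. A minimal sketch of my own (an illustration, not a production design) of schoolbook addition over little-endian base-2^32 limbs, the representation most bignum libraries start from:

        using System;
        using System.Collections.Generic;

        static class BigNumSketch
        {
            // Magnitude stored little-endian in base 2^32 "limbs".
            static uint[] Add(uint[] a, uint[] b)
            {
                var result = new List<uint>();
                ulong carry = 0;
                for (int i = 0; i < Math.Max(a.Length, b.Length); i++)
                {
                    ulong sum = carry
                              + (i < a.Length ? a[i] : 0u)
                              + (i < b.Length ? b[i] : 0u);
                    result.Add((uint)sum);        // low 32 bits become this limb
                    carry = sum >> 32;            // overflow propagates to the next limb
                }
                if (carry != 0) result.Add((uint)carry);
                return result.ToArray();
            }

            static void Main()
            {
                // (2^32 - 1) + 1 = 2^32, i.e. limbs [0, 1]
                var sum = Add(new uint[] { 0xFFFFFFFF }, new uint[] { 1 });
                Console.WriteLine(string.Join(", ", sum));   // prints: 0, 1
            }
        }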

  • Should you always pass the bare minimum data needed into a function

    - by Anders Holmström
    Let's say I have a function IsAdmin that checks whether a user is an admin. Let's also say that the admin check is done by matching user id, name, and password against some sort of rule (not important). In my head there are then two possible function signatures for this:

        public bool IsAdmin(User user);
        public bool IsAdmin(int id, string name, string password);

    I most often go for the second type of signature, thinking that: the function signature gives the reader a lot more info; the logic contained inside the function doesn't have to know about the User class; and it usually results in slightly less code inside the function. However, I sometimes question this approach, and I also realize that at some point it would become unwieldy. If, for example, a function mapped ten different object fields into a resulting bool, I would obviously send in the entire object. But apart from a stark example like that, I can't see a reason to pass in the actual object. I would appreciate any arguments for either style, as well as any general observations you might offer. I program in both object-oriented and functional styles, so the question should be seen as regarding any and all idioms.

  • Content, MetaData and Taxonomy 2: Overview of the Data Layer

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized Content store, together with the ability to apply Taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. In the first blog in this series I covered the Taxonomy Manager...

  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music; I particularly like its scoring system, and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small, I only have a favourite set of songs on that computer rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant, of course, for the songs that exist on both computers; the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared folder and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.

  • Is it Considered Good SQL practice to use GUID to link multiple tables to same Id field?

    - by Mallow
    I want to link several tables to a many-to-many (m2m) table. One table would be called location, and this table would always be on one side of the m2m table. But I will have a list of several tables, for example: Cards, Photographs, Illustrations, Vectors. Would using GUIDs between these tables to link them to a single column in another table be considered 'good practice'? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to issues? UPDATE: I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by just having the table's initial as part of the id, so that the Cards table's ids would be c0001 and Illustrations' would be I001. Regardless of this change, the question still stands.
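
    To pin down the layout being asked about, here is a sketch of mine (table and column names are placeholders): every item table gets a GUID primary key, and one m2m table points at all of them through a single column.

        -- Each item table keyed by a GUID (CHAR(36), e.g. filled with UUID()).
        CREATE TABLE cards (
            id    CHAR(36) NOT NULL PRIMARY KEY,
            title VARCHAR(100)
        );
        CREATE TABLE photographs (
            id    CHAR(36) NOT NULL PRIMARY KEY,
            title VARCHAR(100)
        );

        -- One m2m table; item_id may hold an id from cards, photographs, etc.
        CREATE TABLE location_item (
            location_id INT      NOT NULL,
            item_id     CHAR(36) NOT NULL,
            PRIMARY KEY (location_id, item_id)
            -- Note: a FOREIGN KEY can reference only one parent table, so MySQL
            -- cannot enforce ON UPDATE/DELETE CASCADE across several tables here.
        );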

  • How to name setter that does data conversion?

    - by IAdapter
    I'm struggling with how to name this method. I don't like the "set" prefix, because I feel it should be reserved for normal "dumb" setters, and some tools might not like it (I did not check it in Checkstyle, PMD, etc., but I've got a feeling they won't like it). For example (in Java, but I feel it's language-agnostic):

        public void setActionListenerClicked(boolean actionListenerClicked) {
            this.actionListenerClicked = actionListenerClicked ? "1" : "0";
        }

    The only purpose of this method is to set; this method is needed and cannot be joined with any other (because of the framework used). P.S. I DO know that this question is similar to "How to name a multi-setter?", but I feel it's not the same question.

  • backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs, and HDDs, but now they want all of it in one place, attached to the network for everybody in the office to access. 20 TB of the data is new and the rest is old; the important thing now is to store these 20 TB and gradually store the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure that has either tape drives or HDDs (we haven't decided yet), or getting a NAS server and connecting it to the HP server. What should we do? This is new for us ...

  • Greatly Enhanced LINQ Capabilities in Devart ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. New dotConnect ADO.NET Providers offer advanced LinqConnect ORM solution (formerly known as Devart LINQ support) closely compatible with Microsoft LINQ to SQL and having its own advanced features. Devart...

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table - errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2
    Insert load: 50M-row dataset
    Data nodes: 3

        LOAD DATA INFILE '/input_50m/Table_1_sorted.csv'
        IGNORE INTO TABLE nw_ndb
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'

    We recently set up a new MySQL Cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" when inserting 5 million rows into a single table in MySQL Cluster. We are using the LOAD DATA INFILE command to load the data into the table from the CSV file. The server (mysqld, ndb nodes) has good hardware: 126 GB RAM, 32 GB allocated to mysqld. We tried the settings below with no effect:

        SET autocommit=0;
        SET FOREIGN_KEY_CHECKS=0;
        SET unique_checks=0;
        SET GLOBAL ndb_batch_size=8*1024*1024;
        SET GLOBAL ndb_cache_check_time = 1000;
        SET GLOBAL ndb_index_stat_cache_entries = 10000000;
        SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
        SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?

  • Most efficient Implementation a Tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first; after that it is going to be used exclusively for iterating through its branches, in no specific order. The tree will have several million elements and must be fast, but also memory-efficient. My plan is a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and the search on the tree are two different problems. Can I focus on how best to solve each problem individually? The construction has to be as fast as possible, but it can use memory as it pleases. Then comes the transformation into a format that gives us speed when iterating the tree's branches; this should preferably be an array, to avoid going back and forth between RAM and cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I then transform it into a structure that gives me the best speed and memory?
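
    A minimal sketch of that flattening step (my own illustration, with placeholder types): a preorder walk packs the pointer-based nodes into one contiguous vector, storing each node's subtree size so whole branches can be walked or skipped by index arithmetic alone.

        #include <cstddef>
        #include <iostream>
        #include <vector>

        // Build-time representation: flexible, pointer-based.
        struct Node {
            int value;
            std::vector<Node*> children;
        };

        // Iteration-time representation: one contiguous, cache-friendly array.
        // A node's descendants follow it in preorder.
        struct FlatNode {
            int value;
            std::size_t subtree_size;   // this node plus all its descendants
        };

        // Preorder flattening; returns the size of the subtree rooted at n.
        static std::size_t flatten(const Node* n, std::vector<FlatNode>& out) {
            std::size_t index = out.size();
            out.push_back({n->value, 1});
            for (const Node* child : n->children)
                out[index].subtree_size += flatten(child, out);
            return out[index].subtree_size;
        }

        int main() {
            Node leaf1{2, {}}, leaf2{3, {}};
            Node root{1, {&leaf1, &leaf2}};

            std::vector<FlatNode> flat;
            flatten(&root, flat);

            for (const FlatNode& f : flat)
                std::cout << f.value << " (subtree " << f.subtree_size << ")\n";
            // prints: 1 (subtree 3), 2 (subtree 1), 3 (subtree 1)
        }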
