Search Results

Search found 59787 results on 2392 pages for 'data loss prevention'.

Page 152/2392

  • Large product image structured data and visibility

    - by Mark Resølved
    On an eCommerce site we have two images for a product: one medium-sized image shown at the top of the page, and one large photo shown on click in an overlay. We use http://schema.org/Product microdata on the page. We'd like the large, initially hidden photo to be the main image for the product, as it's the better looking one, so it's also referenced in the XML sitemap as <image:image>, and we also put the itemprop="image" attribute on the hidden large image. But I'm wondering: is it a bad idea to use a microdata attribute on a hidden (style="display:none;") element? Is there a better way to embed the main image in terms of SEO, without showing it initially?
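
    For what it's worth, microdata values don't have to be rendered at all: the spec-sanctioned way to expose a URL without visible markup is a link element inside the item scope. A minimal hedged sketch (file paths illustrative):

      <div itemscope itemtype="http://schema.org/Product">
        <h1 itemprop="name">Example product</h1>
        <!-- visible medium-sized image; the large photo only appears in the overlay -->
        <img src="/img/product-medium.jpg" alt="Example product">
        <!-- machine-readable pointer to the large photo, no display:none needed -->
        <link itemprop="image" href="/img/product-large.jpg">
      </div>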

    Read the article

  • Checking form input data on submit with pure PHP [migrated]

    - by Leron
    I have some experience with PHP, but I have never even tried to do this with pure PHP; now a friend of mine asked me to help him with this task, so I sat down and wrote some code. What I'm asking for is your opinion on whether this is the right way to do this when you want to use only PHP, and whether there is anything I can change to make the code better. Other than that, I think the code is working, at least in the few tests I made with it. Here it is:

      <?php
      session_start();
      // define variables and initialize with empty values
      $name = $address = $email = "";
      $nameErr = $addrErr = $emailErr = "";
      $_SESSION["name"] = $_SESSION["address"] = $_SESSION["email"] = "";
      $_SESSION["first_page"] = false;

      if ($_SERVER["REQUEST_METHOD"] == "POST") {
          if (empty($_POST["name"])) {
              $nameErr = "Missing";
          } else {
              $_SESSION["name"] = $_POST["name"];
              $name = $_POST["name"];
          }
          if (empty($_POST["address"])) {
              $addrErr = "Missing";
          } else {
              $_SESSION["address"] = $_POST["address"];
              $address = $_POST["address"];
          }
          if (empty($_POST["email"])) {
              $emailErr = "Missing";
          } else {
              $_SESSION["email"] = $_POST["email"];
              $email = $_POST["email"];
          }
      }

      if ($_SESSION["name"] != "" && $_SESSION["address"] != "" && $_SESSION["email"] != "") {
          $_SESSION["first_page"] = true;
          header('Location: http://localhost/formProcessing2.php');
          exit; // stop the script once the redirect has been sent
          //echo $_SESSION["name"] . " " . $_SESSION["address"] . " " . $_SESSION["email"];
      }
      ?>
      <!DOCTYPE html>
      <html>
      <head>
      <style>
      .error { color: #FF0000; }
      </style>
      </head>
      <body>
      <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]); ?>">
          Name <input type="text" name="name" value="<?php echo htmlspecialchars($name); ?>">
          <span class="error"><?php echo $nameErr; ?></span> <br />
          Address <input type="text" name="address" value="<?php echo htmlspecialchars($address); ?>">
          <span class="error"><?php echo $addrErr; ?></span> <br />
          Email <input type="text" name="email" value="<?php echo htmlspecialchars($email); ?>">
          <span class="error"><?php echo $emailErr; ?></span> <br />
          <input type="submit" name="submit" value="Submit">
      </form>
      </body>
      </html>

    Read the article

  • What are the most known arbitrary precision arithmetic implementation approaches?

    - by keykeeper
    I'm going to write a class library for .NET which provides an implementation of arbitrary-precision arithmetic for integer, rational and maybe complex numbers. Which well-known approaches should I become familiar with? I tried to start with Knuth's TAOCP Vol. 2 (Seminumerical Algorithms, Chapter 4: Arithmetic) but it's too complicated; at least, I couldn't get the ideas in a relatively short period of time.
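
    As a taste of the classical "schoolbook" approach covered in TAOCP, here is a minimal sketch of arbitrary-precision addition over base-10^9 limbs (shown in Java; the idea ports directly to .NET):

      // numbers are little-endian arrays of "limbs", each in [0, 10^9)
      static final int BASE = 1_000_000_000;

      static int[] add(int[] a, int[] b) {
          int n = Math.max(a.length, b.length);
          int[] sum = new int[n + 1];          // one extra limb for a final carry
          long carry = 0;
          for (int i = 0; i < n; i++) {
              long s = carry
                     + (i < a.length ? a[i] : 0)
                     + (i < b.length ? b[i] : 0);
              sum[i] = (int) (s % BASE);       // digit that stays in this limb
              carry  = s / BASE;               // overflow moves one limb up
          }
          sum[n] = (int) carry;
          return sum;
      }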

    Read the article

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200 GB of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data into the datacentre without rendering the connection unusable during office hours, because there's far too much to send overnight on the two-meg upstream connection we have here. I'm familiar with using tools like nice to stop a long-running process degrading machine performance. Is there a simple way to throttle a connection during office hours, so the long-running transfer doesn't block the pipe, but then opens up outside office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the datacentre, but I'd like to avoid doing that if I could. We're using CentOS Linux for our servers, in the office and the datacentre, so extra points for an open source Linux answer.
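
    A hedged sketch of one common approach, assuming the mailbox data can be staged as plain files with rsync (paths and host are illustrative; a real Zimbra cutover would still need a final sync with the services stopped). rsync's --bwlimit caps throughput and --partial lets an interrupted run resume, so a capped daytime job and an uncapped overnight job can simply be scheduled from cron:

      # during office hours: cap the transfer at roughly 50 KB/s so the pipe stays usable
      rsync -az --partial --bwlimit=50 /opt/zimbra/backup/ user@datacentre:/srv/zimbra/

      # outside office hours: rerun without the cap; already-sent data is skipped
      rsync -az --partial /opt/zimbra/backup/ user@datacentre:/srv/zimbra/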

    Read the article

  • PHP Form Data: Generate PDF & Send to Email?

    - by tom
    I'm a beginner in PHP. I was wondering if this is easy to do or if I'd have to outsource it to a programmer. Basically, when a user fills in the PHP form and submits it, I need it to generate a PDF (with all the labels etc.) which will then be emailed/attached to MY email and NOT the user who submitted the form. I have looked at TCPDF and FPDI, but I don't think any of those scripts let me do this specifically, as from what I've heard they generate a download link for the user, and that is not what I need. If anyone can help me it would be greatly appreciated. Regards, Tom
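
    This is doable in pure PHP with common libraries; the download link is just the default output mode, not the only one. A hedged sketch using TCPDF and PHPMailer (require paths, field names and addresses are illustrative):

      <?php
      require_once 'tcpdf/tcpdf.php';
      require_once 'PHPMailer/PHPMailerAutoload.php';

      // render the submitted fields into a PDF held in memory
      $pdf = new TCPDF();
      $pdf->AddPage();
      $pdf->Write(0, "Name: " . $_POST['name'] . "\n");
      $pdf->Write(0, "Message: " . $_POST['message'] . "\n");
      $pdfData = $pdf->Output('form.pdf', 'S');   // 'S' returns the PDF as a string

      // mail it to YOUR address; nothing is sent back to the visitor
      $mail = new PHPMailer();
      $mail->addAddress('you@example.com');
      $mail->Subject = 'New form submission';
      $mail->Body    = 'Submitted form attached as PDF.';
      $mail->addStringAttachment($pdfData, 'form.pdf');
      $mail->send();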

    Read the article

  • Tracking down Data Execution

    - by Agnel Kurian
    I have some malware infecting one of our machines at home. It first showed up as winulty.exe. After investigating, I am of the opinion that winulty.exe itself is an uninfected file but is being modified after it has been loaded into memory. Turning on Data Execution Prevention for all processes and services has confirmed this to be true. How do I track down the process responsible for this? I've used File Monitor from sysinternals.com to monitor winulty.exe, and I see it being accessed by the svchost.exe instance hosting most of the system services and also by dfrgntfs.exe. How do I know which service or which DLL has been infected?
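
    One hedged way to narrow this down from a command prompt (the PID is illustrative): tasklist can map each svchost.exe instance to the services it hosts, and can list the DLLs a given process has loaded:

      rem which services live in which svchost.exe instance
      tasklist /svc /fi "IMAGENAME eq svchost.exe"

      rem DLLs loaded by the suspect instance, once you know its PID
      tasklist /m /fi "PID eq 1044"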

    Read the article

  • Should you always pass the bare minimum data needed into a function

    - by Anders Holmström
    Let's say I have a function IsAdmin that checks whether a user is an admin. Let's also say that the admin check is done by matching user id, name and password against some sort of rule (not important). In my head there are then two possible function signatures for this:

      public bool IsAdmin(User user);
      public bool IsAdmin(int id, string name, string password);

    I most often go for the second type of signature, thinking that:

    - The function signature gives the reader a lot more info.
    - The logic contained inside the function doesn't have to know about the User class.
    - It usually results in slightly less code inside the function.

    However, I sometimes question this approach, and also realize that at some point it would become unwieldy. If, for example, a function mapped ten different object fields into a resulting bool, I would obviously send in the entire object. But short of a stark example like that, I can't see a reason to pass in the actual object. I would appreciate any arguments for either style, as well as any general observations you might offer. I program in both object-oriented and functional styles, so the question should be seen as regarding any and all idioms.
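
    One middle ground sometimes suggested, sketched here in Java (the Credentials name is hypothetical): declare a narrow interface that User implements, so the signature still spells out what the function depends on while callers keep passing a single object:

      public interface Credentials {
          int id();
          String name();
          String password();
      }

      // IsAdmin sees only the three fields it needs, not the whole User class
      static boolean isAdmin(Credentials c) {
          return c.id() == 0 && "admin".equals(c.name());   // placeholder rule
      }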

    Read the article

  • Content, MetaData and Taxonomy 2: Overview of the Data Layer

    This article is cross-posted from my personal blog. In DotNetNuke version 5.3, we introduced the concept of a centralized Content store, together with the ability to apply Taxonomies (categories) to the content. We have extended this in DNN 5.4 by completing the MetaData API as well as adding Folksonomy (user tags). In this series of blogs I will explain how developers can take advantage of these new features in their own extensions. In the first blog in this series I covered the Taxonomy Manager...

    Read the article

  • Sync Banshee library data.

    - by Dom
    I use Banshee to organise my music, I particularly like its scoring system and I have smart playlists based on it. However, I have two versions of my music library, one on each of my computers. As one of the computers is small I only have a favourite set of songs on that computer rather than my whole collection. The computers are not on a local network, but I do use Ubuntu One for file sharing between them. Is there any way I can synchronise song data (play count, score, skip count ...) and playlist data (including smart playlists that include songs based on this data) between the two computers? This would only be relevant of course for the songs that exist on both computers, the songs that exist on only one would need to be ignored. I did consider putting the library data file (I think it is .xml but I'm not sure) into the shared file and creating a symbolic link to it, but then I wouldn't be able to have a different set of songs on each computer. Thank you.

    Read the article

  • Recover from sudo rm -rf command

    - by user106116
    By mistake, I ended up executing an "rm -rf /" command via sudo on my laptop, which erased many files before it stopped. Now when I restart the system, it gives a GRUB rescue prompt. I am dual-booting Ubuntu 12.04 and Windows 7. I request help with the following:

    - How do I fix the currently installed Ubuntu without overwriting/erasing the files left over (from the rm -rf command)?
    - Is using Boot-Repair safe?
    - Is there a way to boot directly into Windows 7?

    Read the article

  • Is it considered good SQL practice to use a GUID to link multiple tables to the same id field?

    - by Mallow
    I want to link several tables to a many-to-many (m2m) table. One table, location, would always be on one side of the m2m table, but the other side could be any of several tables, for example:

    - Cards
    - Photographs
    - Illustrations
    - Vectors

    Would using GUIDs between these tables to link them to a single column in another table be considered good practice? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to any issues?

    UPDATE: I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by having the table's initial as part of the id, so that a Cards id would be c0001 and an Illustrations id would be i0001. Regardless of this change, the questions still stand.
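
    On the cascade question: a MySQL foreign key can only reference one parent table, so a single GUID column shared by Cards, Photographs, etc. cannot be enforced (or cascaded) by the engine itself. A hedged sketch of what such an m2m table tends to look like in practice (names illustrative, InnoDB assumed):

      CREATE TABLE location_item (
          location_id INT NOT NULL,
          item_guid   CHAR(36) NOT NULL,   -- e.g. generated with UUID()
          item_type   ENUM('card','photograph','illustration','vector') NOT NULL,
          PRIMARY KEY (location_id, item_guid),
          -- only this side can cascade automatically; the GUID side has no FK
          FOREIGN KEY (location_id) REFERENCES location (id)
              ON DELETE CASCADE ON UPDATE CASCADE
      ) ENGINE=InnoDB;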

    Read the article

  • Repair ext4 filesystem on USB drive

    - by phineas
    Yet another filesystem question. I wanted to use a USB drive that I hadn't mounted for a month or so and was surprised that Ubuntu was unable to mount it. I looked it up in the disk utility, and it said it had discovered a device with 17 MB instead of 2 GB. The hardware looks intact; I hope for the best for repairing the ext4 filesystem. I followed the instructions from HOWTO: Repair a broken Ext4 Superblock in Ubuntu, but I wasn't successful.

      # fsck.ext4 -v /dev/sdb
      e2fsck 1.42.5 (29-Jul-2012)
      ext2fs_open2: Bad magic number in super-block
      fsck.ext4: Superblock invalid, trying backup blocks...
      fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb

      The superblock could not be read or does not describe a correct ext2
      filesystem. If the device is valid and it really contains an ext2
      filesystem (and not swap or ufs or something else), then the superblock
      is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193 <device>

    However, when I run the recommended solution to try the alternate superblock, I get the following output:

      # e2fsck -b 8193 /dev/sdb
      e2fsck 1.42.5 (29-Jul-2012)
      e2fsck: Invalid argument while trying to open /dev/sdb

    plus the same error message as above. Any ideas how to recover the drive? Thank you very much!

    Edit: testdisk won't help. I'm still stunned why the tools only discover 17 MB.
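
    A commonly suggested next step, assuming the filesystem really spans the whole device and was created with default parameters: ask mke2fs where the backup superblocks would live, without writing anything, then point e2fsck at one of them (the block number below is illustrative):

      # -n is a dry run: prints the layout, creates no filesystem
      mke2fs -n /dev/sdb

      # retry with one of the "Superblock backups stored on blocks:" values
      e2fsck -b 32768 /dev/sdb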

    Read the article

  • How to name setter that does data conversion?

    - by IAdapter
    I'm struggling with how to name this method. I don't like the "set" prefix, because I feel it should be reserved for normal "dumb" setters, and some tools might not like it (I did not check it in Checkstyle, PMD, etc., but I have a feeling they won't like it). For example (in Java, but I feel it's language-agnostic):

      public void setActionListenerClicked(boolean actionListenerClicked) {
          this.actionListenerClicked = actionListenerClicked ? "1" : "0";
      }

    The only purpose of this method is ONLY to set; this method is needed and cannot be joined with any other (because of the framework used). P.S. I DO know that this question is similar to How to name multi-setter?, but I feel it's not the same question.
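
    One hedged option, assuming the framework only cares about the stored value and not the method name (the name below is just a possibility): pick a verb that signals the conversion, leaving "set" reserved for the dumb setters:

      public void storeActionListenerClicked(boolean clicked) {
          // not a plain setter: converts to the "1"/"0" string the framework expects
          this.actionListenerClicked = clicked ? "1" : "0";
      }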

    Read the article

  • Backup and file server for 50+ TB of data

    - by a-bomb
    Our office wants to build a new server to handle our data. Over the last 10 years our data was stored on CDs, DVDs and HDDs, but now they want all of it in one place, attached to the network, so everybody in the office can access it. 20 TB of the data is new and the rest is old; the important thing now is to store these 20 TB and gradually move the other 30 TB over time. So what is the best solution? We thought of getting an HP server and connecting it to an external enclosure that holds either tape drives or HDDs (we haven't decided yet), or getting a NAS and connecting it to the HP server. What should we do, because this is new for us ...

    Read the article

  • Greatly Enhanced LINQ Capabilities in Devart ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. The new dotConnect ADO.NET providers offer the advanced LinqConnect ORM solution (formerly known as Devart LINQ support), closely compatible with Microsoft LINQ to SQL and having its own advanced features. Devart...

    Read the article

  • Q: MySQL Cluster - Data insertion in NDBCLUSTER table errors out after 5 million rows

    - by Mata
    MySQL Cluster version: mysql-5.6.11 ndb-7.3.2
    Insert load = 50M-row dataset
    Data nodes = 3

      LOAD DATA INFILE '/input_50m/Table_1_sorted.csv' IGNORE INTO TABLE nw_ndb
          FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'

    We recently set up a new MySQL Cluster and are trying to load data from a flat file, but we get the error "Got temporary error 4010 'Node failure caused abort of transaction' from NDBCLUSTER" when inserting 5 million rows into a single table in MySQL Cluster. We are using the "LOAD DATA INFILE" command to load the data into the table from a csv file. The server (mysqld, ndb nodes) has good hardware: 126 GB RAM, 32 GB allocated to mysqld. We tried the settings below with no effect:

      SET autocommit=0;
      SET FOREIGN_KEY_CHECKS=0;
      SET unique_checks=0;
      SET GLOBAL ndb_batch_size=8*1024*1024;
      SET GLOBAL ndb_cache_check_time = 1000;
      SET GLOBAL ndb_index_stat_cache_entries = 10000000;
      SET SESSION BULK_INSERT_BUFFER_SIZE=256217728;
      SET GLOBAL KEY_BUFFER_SIZE=256217728;

    Any clues?
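
    For what it's worth, LOAD DATA INFILE runs as one huge transaction by default, and NDB is tuned for many small transactions rather than a single 5M-row commit. A hedged sketch of the direction usually suggested (values illustrative; the data node logs should confirm what actually ran out):

      # load in ~1M-row chunks instead of one 50M-row statement
      split -l 1000000 /input_50m/Table_1_sorted.csv chunk_

      # config.ini on the management node, [ndbd default] section
      MaxNoOfConcurrentOperations=1000000
      RedoBuffer=64M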

    Read the article

  • Most efficient implementation of a tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first, and then it is going to be used exclusively for iterating through its branches in no specific order. The tree will have several million elements and must be fast but also memory efficient. My plan is a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and search on the tree are two different problems; can I focus on how to solve each problem in the best way individually? The construction has to be as fast as possible, but it can use memory as it pleases. Then comes the transformation into a format that gives us speed when iterating the tree's branches; this should preferably be an array, to avoid going back and forth from RAM to cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I transform it into a structure that gives me the best speed and memory?
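
    A minimal sketch of the two-phase plan (the int payload is a placeholder): build into a pointer-free node pool where indices replace child pointers, so the whole tree lives in one contiguous std::vector, then iterate that vector linearly, which is as cache-friendly as traversal gets when branch order doesn't matter:

      #include <cstdint>
      #include <vector>

      struct Node {
          int32_t value       = 0;
          int32_t firstChild  = -1;   // index into the same pool, -1 = none
          int32_t nextSibling = -1;
      };

      class Tree {
          std::vector<Node> pool;     // contiguous storage, no per-node allocation
      public:
          // parent == -1 adds a root; children are prepended, which is fine
          // since branches are later visited in no specific order
          int32_t add(int32_t parent, int32_t v) {
              Node n;
              n.value = v;
              if (parent >= 0) n.nextSibling = pool[parent].firstChild;
              const int32_t idx = static_cast<int32_t>(pool.size());
              pool.push_back(n);
              if (parent >= 0) pool[parent].firstChild = idx;
              return idx;
          }
          // iteration phase: a flat scan touches each node exactly once,
          // streaming through memory with no pointer chasing
          template <typename F> void forEach(F f) const {
              for (const Node& n : pool) f(n.value);
          }
      };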

    Read the article
