Search Results

Search found 12836 results on 514 pages for 'isp org'.

Page 274/514

  • php run function on all images from one dir in recursive mode (noob)

    - by Steve
    I have a function: $result = create_watermark('input_file_name', 'output_file_name'); I also have a directory called /images with about 500 images in it, all named images_(some_unknown_number).png (all PNG). I want to run each of them through the function in a loop and write the output to /markedimage/ as images_1.png, images_2.png, images_3.png, and so on. I need help with how to loop over the files and how to build the changing output names. The script will run on Ubuntu, so a shell solution would work too. If anybody wants to check the function, it is here: http://paste2.org/p/789149 Please provide code if you can, because I am a newbie. Thanks in advance.

    Read the article

  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:

    - Affordable: less than $100 per year or as a first-time cost.
    - Reliable: at least a smaller chance of failing than there is of fire or flood.
    - Easy for the initial backup and for adding to it, and at least semi-easy to recover from.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.

    - Hard drive: I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice; it would be very easy to take out once a month and add files to.
    - DVDs: way too much of a hassle for both backup and restore.
    - Tape: no idea about the affordability of this option.
    - Online: given that I already have at least 300GB, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2 Mbps at best, this just doesn't sound like a good option for me, but I could be convinced.
    - Other: have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and would be destroyed along with it in a disaster. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two drives to reduce the likelihood of failure?

    Read the article

  • JUnit Custom Rules

    - by Jon
    JUnit 4.7 introduced the concept of custom rules: http://www.infoq.com/news/2009/07/junit-4.7-rules There are a number of built-in JUnit rules, including TemporaryFolder, which helps by cleaning up folders after a test has run:

        @Rule public TemporaryFolder tempFolder = new TemporaryFolder();

    There's a full list of built-in rules here: http://kentbeck.github.com/junit/javadoc/latest/org/junit/rules/package-summary.html I'm interested in finding out what custom rules are in place where you work, or which useful custom rules you currently use.
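
    As a concrete illustration of the kind of rule being asked about, here is a minimal custom rule that times each test method. It is only a sketch against the JUnit 4.7 MethodRule API; the class name and output format are made up for the example.

        import org.junit.rules.MethodRule;
        import org.junit.runners.model.FrameworkMethod;
        import org.junit.runners.model.Statement;

        // Wraps every test method and prints how long it took.
        public class StopwatchRule implements MethodRule {
            public Statement apply(final Statement base, final FrameworkMethod method, Object target) {
                return new Statement() {
                    @Override
                    public void evaluate() throws Throwable {
                        long start = System.currentTimeMillis();
                        try {
                            base.evaluate(); // run the actual test
                        } finally {
                            System.out.println(method.getName() + " took "
                                    + (System.currentTimeMillis() - start) + " ms");
                        }
                    }
                };
            }
        }

    It is used exactly like the built-in rules: @Rule public StopwatchRule stopwatch = new StopwatchRule();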

    Read the article

  • NAT vs public IP (and blocked ports)

    - by user1646166
    I have a problem with my ISP. They say that they don't block any ports and that I have a public IP, while I think both statements are false. Before I talk to them again (which is really tough when my understanding of these terms differs from theirs), I would like to make some things clear. It seems my computer is behind NAT (is it possible to have a public IP and be behind NAT at the same time?). When I check my IP through some external server and type that IP into a browser, I get the home page of some router (not mine). Isn't that proof that my IP isn't public? Also, I have problems making connections via some ports. E.g. when I try to connect through some high port (> 1023) via SSH, it doesn't work. Is it possible that a certain range of outgoing ports from my computer is blocked? Or is it simply that my SSH client (PuTTY) can't receive incoming packets because of blocked incoming ports? To avoid some questions: it's not a problem with my router; I tried connecting my PC directly and it also didn't work, while connected over 3G using my phone with USB tethering it does work. Thanks!

    Read the article

  • Flex maps howto examples

    - by alessandro ferrucci
    Hello, I've stumbled upon this Flash map: http://www.washingtonpost.com/wp-srv/special/nation/unemployment-by-county/ It looks like they used this map to construct the site: http://commons.wikimedia.org/wiki/File:USA_Counties_with_FIPS_and_names.svg I am curious about what people have done, or about any blogs that describe what can be done with Flex and simple maps like this (not Google Maps-style maps, but simple all-in-memory maps like this one). It would be cool to see what Flex can do in terms of maps, and how. Thanks!

    Read the article

  • Finding a Minimum Equivalent Graph of a Digraph

    - by kohlerm
    I'm looking for an implementation, preferably in Java, of an algorithm for finding a Minimum Equivalent Graph of a digraph (http://portal.acm.org/citation.cfm?id=321526.321534). Even better would be an implementation of "Approximating the minimum equivalent digraph": http://cat.inist.fr/?aModele=afficheN&cpsidt=3634076 (requires ACM membership, sorry); alternative link: http://www.cs.umd.edu/~samir/grant/kry94b.ps (PostScript).
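
    For reference, the problem is NP-hard on general digraphs (which is why the second paper is about approximation), but on a DAG the minimum equivalent graph is exactly the transitive reduction. Below is a minimal Java sketch of that special case only; it is written for this listing rather than taken from either paper, and the adjacency-map representation and class name are the editor's own.

        import java.util.ArrayDeque;
        import java.util.Collections;
        import java.util.Deque;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        // Transitive reduction of a DAG: an edge u->v is dropped when v is still
        // reachable from u through some longer path in the original graph.
        public class TransitiveReduction {
            public static Map<Integer, Set<Integer>> reduce(Map<Integer, Set<Integer>> graph) {
                Map<Integer, Set<Integer>> reduced = new HashMap<>();
                for (Map.Entry<Integer, Set<Integer>> entry : graph.entrySet()) {
                    int u = entry.getKey();
                    Set<Integer> kept = new HashSet<>();
                    for (int v : entry.getValue()) {
                        if (!reachableAvoidingDirectEdge(graph, u, v)) {
                            kept.add(v); // no alternative path, so the edge is essential
                        }
                    }
                    reduced.put(u, kept);
                }
                return reduced;
            }

            // Depth-first search from u that skips the direct edge u->v.
            private static boolean reachableAvoidingDirectEdge(Map<Integer, Set<Integer>> g, int u, int v) {
                Deque<Integer> stack = new ArrayDeque<>();
                Set<Integer> seen = new HashSet<>();
                for (int w : g.getOrDefault(u, Collections.<Integer>emptySet())) {
                    if (w != v) {
                        stack.push(w);
                    }
                }
                while (!stack.isEmpty()) {
                    int x = stack.pop();
                    if (x == v) {
                        return true;
                    }
                    if (seen.add(x)) {
                        for (int w : g.getOrDefault(x, Collections.<Integer>emptySet())) {
                            stack.push(w);
                        }
                    }
                }
                return false;
            }
        }

    For a general digraph the usual decomposition is to handle each strongly connected component separately and then reduce the acyclic condensation, which is where the approximation work in the cited paper comes in.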

    Read the article

  • php upload file function

    - by Jacksta
    I am trying to write a script which uploads a file via an HTML form. When I click submit nothing happens.

    File: upload_form.html

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <form action="do_upload.php" method="post" enctype="multipart/form-data"></form>
        <p><strong>File to upload</strong></p>
        <p><input name="img1" type="file" size="30" /></p>
        <p><input name="submit" type="submit" value="Upolad File" /></p>
        </body>
        </html>

    File: do_upload.php

        <?php
        if ($_FILES[img1] != "" {
            @copy($_FILES[img1] [tm_name], "/tmp" .$_FILES[img1][name])
                or die("couldnt copy the file");
        } else {
            die("no file specified");
        }
        ?>
        <HTML>
        <head>
        <title>Successfull File Upload</title>
        </head>
        <body>
        <h1>Success</h1>
        <p>You sent: <? echo $_FILES[img1][name]; ?>, a <? echo $_FILES[img1][size]; ?>byte filw with a mime type of <? echo $_FILES[img1][type]; ?></p>
        </body>
        </HTML>

    Read the article

  • Problem using date when querying the appengine datastore

    - by manu1001
    I'm running this query:

        SELECT FROM com.Data WHERE entryDate DATE('2010-3-16')

    I get this error:

        org.datanucleus.store.appengine.query.DatastoreQuery$UnsupportedDatastoreFeatureException: Problem with query DATE('2010-3-16'): Unsupported method while parsing expression: InvokeExpression{[null].DATE(Literal{2010-3-16})}

    The same query works when I use it on the admin console, but it does not work from code (Java), either locally or when deployed. Any ideas?
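
    The error is the datastore's JDO/JPA query parser rejecting the DATE() function, which the admin console's GQL parser does understand. A commonly suggested workaround is to build the java.util.Date in Java and pass it in as a query parameter. The following is only a minimal JDO sketch; the Data entity, the entryDate field, and the choice of a greater-than comparison are assumptions based on the snippet above.

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.List;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;

        public class DataQueries {
            @SuppressWarnings("unchecked")
            public static List<Data> entriesAfter(PersistenceManager pm, String day) throws Exception {
                // Parse the date in Java instead of calling DATE() inside the query string.
                Date cutoff = new SimpleDateFormat("yyyy-M-d").parse(day);
                Query q = pm.newQuery(Data.class);
                q.setFilter("entryDate > cutoff");
                q.declareParameters("java.util.Date cutoff");
                return (List<Data>) q.execute(cutoff);
            }
        }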

    Read the article

  • Is this a solution for having multiple SSL certificates on the same IP

    - by Saif Bechan
    I am running CentOS on a VPS. I read some guides on having multiple SSL certificates on the same system, but I cannot get the basics to work. The guide that makes the most sense to me does the following. In CentOS I can make virtual NICs, so I made two virtual NICs to start with: 192.168.10.1 and 192.168.10.2. Now, I work in ISPmanager Pro, and this is listening on my primary IP, 1.1.1.1. For each website I have them listening on 192.168.10.1:80 and 192.168.10.1:443. In the hosts file I made the following two entries:

        192.168.10.1 1st.com
        192.168.10.2 2nd.com

    Now the strange thing is that when I browse to 1st.com I do not get the website located at 192.168.10.1; I get the website located at my primary IP, 1.1.1.1. Should I do something like forwarding or routing for this setup to work? And the basic question: will this setup even work? Are SSL certificates based on the IP address, or are they based on the host names, 1st.com and 2nd.com?

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using Wi-Fi. My router is in the basement and my desktop computer is two floors up and on the other side of the house: the worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly. The problem lies here: when redoing the room next to it (while the wall was open), we went ahead and ran some coaxial cable from our attic to our basement, with plenty of slack on both ends (don't ask me why we didn't run a Cat6 cable while we were at it). The question is: can I use the coaxial cable to carry my internet connection? Naturally the router (which needs to stay where it is) takes a coaxial input and has Ethernet outputs. So maybe I would have to take an Ethernet cable, convert it to coax, run the coax up to my computer, and convert back to Ethernet. Is it even possible to convert from coax to Ethernet? Or do I have to attempt to fish a Cat6 cable through the house? I cannot just split the signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks

    Read the article

  • How to update textbox value

    - by Thomas
    I have a textbox in my View. I input a number into the textbox, and then I want the controller to multiply the number and put the result back into the textbox. How can I do that? This is what I have done already. Let's start with the View:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Index</title>
        </head>
        <body>
            <div>
                <h2>Please enter a number</h2>
                <% using (Html.BeginForm()) { %>
                    <%=Html.TextBox("number")%>
                    <input type="submit" value="Index" name ="Index" />
                <% } %>
            </div>
        </body>
        </html>

    As you can see, I have a simple textbox and button. This is my controller:

        using System.Web.Mvc;

        namespace MvcApplication1.Controllers
        {
            public class HomeController : Controller
            {
                //
                // GET: /Home/
                public ActionResult Index()
                {
                    return View();
                }

                [HttpPost]
                public ActionResult Index(int number)
                {
                    number = number * 2;
                    ViewData["id"] = number;
                    return View(ViewData);
                }
            }
        }

    But nothing really happens. Yes, I see the POST is being done, and the code steps into public ActionResult Index(int number). I see that the number is taken from the textbox and multiplied correctly. I've tried using ViewData, as you can see; I've also used TempData. This is another version of the textbox code in the View that I've tried:

        <%=Html.TextBox("number", ViewData["number"])%>

    But it doesn't matter; the textbox doesn't get updated with the new value. How can I do that?

    Read the article

  • Setting Cookie Port

    - by MasterMax1313
    I'm trying to set the port on a cookie in ASP.NET (code below), but I'm getting a very unusual error at runtime (below the code). Any thoughts?

        target.Cookie = new Cookie
        {
            Comment = "Test Comment",
            CommentUri = new System.Uri("http://www.tempuri.org"),
            Discard = false,
            Domain = "tempuri.com",
            Expired = false,
            Expires = new DateTime(2015, 12, 31),
            HttpOnly = false,
            Name = "TestCookie",
            Path = "/",
            Port = "443",
            Secure = false,
            Value = "Test Value",
            Version = 1,
        };

    Exception:

        System.Net.CookieException: The 'Port'='443' part of the cookie is invalid..

    Read the article

  • Protobuf-net Deserialize Open Street Maps

    - by jonperl
    For the life of me I cannot deserialize the protobuf file from Open Street Maps. I am trying to deserialize the following extract to get Nodes: http://download.geofabrik.de/osm/north-america/us-northeast.osm.pbf I am using http://code.google.com/p/protobuf-net/ as the library. I have tried deserializing into a bunch of different objects, but they all come up null. The proto files can be found here: http://trac.openstreetmap.org/browser/applications/utils/export/osm2pgsql/protobuf Any suggestions?

    Read the article

  • Hadoop in a RESTful Java Web Application - Conflicting URI templates

    - by user1231583
    I have a small Java web application in which I am using Jersey 1.12 and the Hadoop 1.0.0 JAR file (hadoop-core-1.0.0.jar). When I deploy my application to my JBoss 5.0 server, the log file records the following error:

        SEVERE: Conflicting URI templates. The URI template / for root resource class org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods and the URI template / transform to the same regular expression (/.*)?

    To make sure my code is not the problem, I have created a fresh web application that contains nothing but the Jersey and Hadoop JAR files along with a small stub. My web.xml is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
            <servlet>
                <servlet-name>ServletAdaptor</servlet-name>
                <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
                <load-on-startup>1</load-on-startup>
            </servlet>
            <servlet-mapping>
                <servlet-name>ServletAdaptor</servlet-name>
                <url-pattern>/mytest/*</url-pattern>
            </servlet-mapping>
            <session-config>
                <session-timeout>30</session-timeout>
            </session-config>
            <welcome-file-list>
                <welcome-file>index.jsp</welcome-file>
            </welcome-file-list>
        </web-app>

    My simple RESTful stub is as follows:

        import javax.ws.rs.core.Context;
        import javax.ws.rs.core.UriInfo;
        import javax.ws.rs.Path;

        @Path("/mytest")
        public class MyRest {
            @Context
            private UriInfo context;

            public MyRest() {
            }
        }

    In my regular application, when I remove the Hadoop JAR files (and the code that uses Hadoop), everything works as I would expect: the deployment is successful and the remaining RESTful services work. I have also tried the Hadoop 1.0.1 JAR files and had the same problem with the conflicting URI template in the NamenodeWebHdfsMethods class. Any suggestions or tips for solving this problem would be greatly appreciated.
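
    One thing worth checking, offered only as a hedged suggestion: with no configuration, Jersey's ServletContainer scans WEB-INF/classes and every JAR in WEB-INF/lib for @Path classes, so it also picks up Hadoop's NamenodeWebHdfsMethods and registers it as a root resource. Limiting the scan to your own resource package with the com.sun.jersey.config.property.packages init-param usually avoids that clash. The package name below is a placeholder for wherever MyRest actually lives.

        <servlet>
            <servlet-name>ServletAdaptor</servlet-name>
            <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
            <init-param>
                <!-- Scan only this package (and its subpackages) for root resources. -->
                <param-name>com.sun.jersey.config.property.packages</param-name>
                <param-value>com.example.rest</param-value>
            </init-param>
            <load-on-startup>1</load-on-startup>
        </servlet>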

    Read the article

  • PHP script loading took over 10 seconds

    - by Misiur
    Me again. I promised not to come back today, but I've got another problem: http://www.misiur.com/me/ took over 10 seconds to load. The whole site code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="pl" lang="pl">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>{site('title')}</title>
        <link rel="stylesheet" type="text/css" href="{site('themeDir')}/global.css" />
        </head>
        <body>
        <div id="site">
        <div id="footer">
        <p>Site Managment System {site('version')} by <a href="http://www.misiur.com">Misiur</a>. Copyright &copy; 2010-{$currYear}.</p>
        </div>
        </div>
        </body>

    I think the trouble is in this piece of code:

        private function replaceFunc($subject) {
            foreach($this->func as $t) {
                preg_match_all('/\{'.$t.'\([a-zA-Z,\']+\)\}/i', $subject, $res);
                for($j = 0; $j < sizeof($res[0]); $j++) {
                    preg_match('/\([a-zA-Z,\']+\)/i', $res[0][$j], $match);
                    if($match > 0) {
                        $prep = explode(", ", substr($match[0], 1, -1));
                        $args = array();
                        for($i = 0; $i < sizeof($prep); $i++) {
                            $args[] = substr($prep[$i], 1, -1);
                        }
                    } else {
                        $args = array();
                    }
                    $subject = preg_replace('/\{'.$t.preg_quote($match[0]).'\}/i', call_user_func_array($t, $args), $subject);
                }
            }
            return $subject;
        }

    It has to find functions from the array ($this->func), cut out their arguments, and call each function with them. I think I've messed something up. Help.

    Read the article

  • Validate an Xml file against a DTD with a proxy. C# 2.0

    - by Chris Dunaway
    I have looked at many examples of validating an XML file against a DTD, but have not found one that allows me to use a proxy. I have a cXML file as follows (abbreviated for display) which I wish to validate:

        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE cXML SYSTEM "http://xml.cxml.org/schemas/cXML/1.2.018/InvoiceDetail.dtd">
        <cXML payloadID="123456" timestamp="2009-12-10T10:05:30-06:00">
        <!-- content snipped -->
        </cXML>

    I am trying to create a simple C# program to validate the XML against the DTD. I have tried code such as the following, but cannot figure out how to get it to use a proxy:

        private static bool isValid = false;

        static void Main(string[] args)
        {
            try
            {
                XmlTextReader r = new XmlTextReader(args[0]);
                XmlReaderSettings settings = new XmlReaderSettings();
                XmlDocument doc = new XmlDocument();
                settings.ProhibitDtd = false;
                settings.ValidationType = ValidationType.DTD;
                settings.ValidationEventHandler += new ValidationEventHandler(v_ValidationEventHandler);
                XmlReader validator = XmlReader.Create(r, settings);
                while (validator.Read())
                    ;
                validator.Close();

                // Check whether the document is valid or invalid.
                if (isValid)
                    Console.WriteLine("Document is valid");
                else
                    Console.WriteLine("Document is invalid");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.ToString());
            }
        }

        static void v_ValidationEventHandler(object sender, ValidationEventArgs e)
        {
            isValid = false;
            Console.WriteLine("Validation event\n" + e.Message);
        }

    The exception I receive is System.Net.WebException: The remote server returned an error: (407) Proxy Authentication Required. It occurs on the line while (validator.Read()). I know I can validate against a DTD locally, but I don't want to change the XML DOCTYPE, since that is what the final form needs to be (this app is solely for diagnostic purposes). For more information about the cXML spec, you can go to cxml.org. I appreciate any assistance. Thanks

    Read the article

  • Parallels Plesk returning strange numbers

    - by Jack W-H
    Hi everyone, as a relatively new server admin I've become a bit confused by some statistics Parallels Plesk Panel 10.0.1 is returning to me. I have a domain ('subscription') set up, mysite.com:

    - mysite.com only hosts files, mostly images
    - its file contents use up about 390MB of disk space

    This is what Plesk is reporting mysite.com to use (see screenshot), along with some more detailed info in a second screenshot. Now this is pretty confusing. I thought at first my site might have been hacked and had content written to disk, but I checked and all is in order; nothing has been hacked into as far as I can tell. So I had a look in the site's control panel for some more in-depth statistics. Now, sod's law, when I go to check my disk space statistics in more depth via the control panel this morning, it says "The data were not collected yet" (not too sure what that means), but last night when I checked it was reporting something odd. It said files were using up 390MB, but 1.80GB or so was being used up by 'Mail Accounts'. This is really strange, as there are no mail accounts set up for the domain. The only hint of 'mail' there is, is the catch-all set up to forward *@mysite.com to a separate, ISP-hosted email account. Any ideas, anybody? I can post more details if you need them. Sorry to be a bit vague, but I'm not sure what else I can post. Thanks, Jack

    Read the article

  • Open Technology: Nanorobots Answer to US Navy

    - by adrianocavalcanti
    Hi everybody, just wondering if someone has suggestions about open technology licensing. I have been working on nanotechnology; here is some info: "Nanorobot Technology: What to Expect from Science - A Personal Letter in Answer to United States Navy" http://www.linuxquestions.org/questions/general-10/nanorobot-technology-what-to-expect-from-science-814060 I started an initiative towards open nanotechnology last October. All comments and suggestions are highly appreciated.

    Read the article

  • Why aren't min-width and max-width working as I expect?

    - by Nathan Long
    I'm trying to adjust a CSS page layout using min-width and max-width. To simplify the problem, I made this test page. I'm trying it out in the latest versions of Firefox and Chrome with the same results.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Testing min-width and max-width</title>
            <style type="text/css">
                div{float: left; max-width: 400px; min-width: 200px;}
                div.a{background: orange;}
                div.b{background: gray;}
            </style>
        </head>
        <body>
            <div class="a">
                (Giant block of filler text here)
            </div>
            <div class="b">
                (Giant block of filler text here)
            </div>
        </body>
        </html>

    Here's what I expect to happen:

    - With the browser maximized, the divs sit side by side, each 400px wide: their maximum width
    - Shrink the browser window, and they both shrink to 200px: their minimum width
    - Further shrinking the browser has no effect on them

    Here's what actually happens, starting at step 2:

    - Shrink the browser window, and as soon as they can't sit side-by-side at their max width, the second div drops below the first
    - Further shrinking the browser makes them get narrower and narrower, as small as I can make the window

    So here are my questions:

    - What does max-width mean if the element will sooner hop down in the layout than go lower than its maximum width?
    - What does min-width mean if the element will happily get narrower than that if the browser window keeps shrinking?
    - Is there any way to achieve what I want: have these elements sit side-by-side, happily shrinking until they reach 200px each, and only then adjust the layout so that the second one drops down?
    - And of course... what am I doing wrong?

    Read the article

  • log4j performance

    - by Bob
    Hi, I'm developing a web app, and I'd like to log some information to help me improve and observe the app (I'm using Tomcat 6). First I thought I would use StringBuilders, append the logs to them, and have a task persist them into the database every 2 minutes or so, because I was worried about the performance of the out-of-the-box logging systems. Then I ran some tests, especially with log4j. Here is my code:

    Main.java

        public static void main(String[] args) {
            Thread[] threads = new Thread[LoggerThread.threadsNumber];
            for(int i = 0; i < LoggerThread.threadsNumber; ++i){
                threads[i] = new Thread(new LoggerThread("name - " + i));
            }
            LoggerThread.startTimestamp = System.currentTimeMillis();
            for(int i = 0; i < LoggerThread.threadsNumber; ++i){
                threads[i].start();
            }
        }

    LoggerThread.java

        public class LoggerThread implements Runnable{
            public static int threadsNumber = 10;
            public static long startTimestamp;
            private static int counter = 0;
            private String name;

            public LoggerThread(String name) {
                this.name = name;
            }

            private Logger log = Logger.getLogger(this.getClass());

            @Override
            public void run() {
                for(int i=0; i<10000; ++i){
                    log.info(name + ": " + i);
                    if(i == 9999){
                        int c = increaseCounter();
                        if(c == threadsNumber){
                            System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTimestamp));
                        }
                    }
                }
            }

            private synchronized int increaseCounter(){
                return ++counter;
            }
        }

    log4j.properties

        log4j.logger.main.LoggerThread=debug, f
        log4j.appender.f=org.apache.log4j.RollingFileAppender
        log4j.appender.f.layout=org.apache.log4j.PatternLayout
        log4j.appender.f.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.f.File=c:/logs/logging.log
        log4j.appender.f.MaxFileSize=15000KB
        log4j.appender.f.MaxBackupIndex=50

    I think this is a very common configuration for log4j. First I used log4j 1.2.14, then I realized there was a newer version, so I switched to 1.2.16. Here are the figures (all in milliseconds):

    - LoggerThread.threadsNumber = 10: 1.2.14: 4235, 4267, 4328, 4282; 1.2.16: 2780, 2781, 2797, 2781
    - LoggerThread.threadsNumber = 100: 1.2.14: 41312, 41014, 42251; 1.2.16: 25606, 25729, 25922

    I think this is very fast. Don't forget that in every cycle the run method does not just log to the file; it also has to concatenate strings (name + ": " + i) and check an if test (i == 9999). When threadsNumber is 10, there are 100,000 loggings, if tests, and concatenations; when it is 100, there are 1,000,000. (I've read somewhere that the JVM uses StringBuilder's append for concatenation, not simple concatenation.) Did I miss something? Am I doing something wrong? Did I forget any factor that could decrease the performance? If these figures are correct, I don't think I have to worry about log4j's performance even if I log heavily, do I?
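
    One small, commonly suggested refinement, shown here only as a sketch: when a logging level is switched off in production, the argument string is still built before log4j discards the event, so guarding expensive statements with the level check avoids that concatenation entirely.

        // Build the message only if INFO is actually enabled for this logger.
        if (log.isInfoEnabled()) {
            log.info(name + ": " + i);
        }

    If disk writes on the calling thread ever become the bottleneck, log4j 1.2 also ships an AsyncAppender that hands events to a background thread, although it has to be configured through log4j.xml rather than log4j.properties.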

    Read the article
