Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.

Page 805/2466 | < Previous Page | 801 802 803 804 805 806 807 808 809 810 811 812  | Next Page >

  • Rankings dropped after I changed the WordPress theme [on hold]

    - by Pramod
    Many of my blog posts (from http://techwayz.com) ranked on the 1st or 2nd page of Google search results. After I changed my WordPress theme, rankings of those posts dropped dramatically. I had accidentally kept the theme's built-in SEO module enabled alongside the Yoast plugin; I've since disabled it. When I checked the HTML of one of my posts with the theme's SEO module enabled, I discovered duplicate meta description tags: one tag had the blog's description and the other had the post's description. Was that the reason for the drop in rankings, or are there other causes? Please help. -Pramod

    Read the article

  • What's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database, call it InternalDb, but it also queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are also required by a few other projects. I'm wondering what the best approach for dealing with this is. Currently, I've just created a project for each of these external databases and then generated EDMX models and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers; I just have a solution like the one below:

        Project.Domain
        ExternalDb1Project.Domain
        ExternalDb2Project.Domain
        Project.Web

    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in Project.Domain on the entities in InternalDb, that's fine. But if I want to do validation for entities from either of the ExternalDbs, where should it go? To be more specific: I retrieve Employees from ExternalDb1Project.Domain, but I want to make sure they are Active. Where should that validation live? How should a project like this be architected at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests; where should the interfaces for these various data contexts reside?
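
    On the IoC question, the usual seam is to hide each data context behind a small interface and inject it, so tests can swap in a fake. The real project is C#/Entity Framework; the sketch below is only a language-neutral illustration of that seam in C++, with hypothetical names (Employee, IEmployeeSource, FakeEmployeeSource):

        #include <string>
        #include <vector>

        // Hypothetical employee record; the real project would use EF-generated POCOs.
        struct Employee {
            std::string name;
            bool active;
        };

        // The seam: validation/business code depends only on this interface,
        // never on a concrete data context.
        class IEmployeeSource {
        public:
            virtual ~IEmployeeSource() = default;
            virtual std::vector<Employee> GetEmployees() = 0;
        };

        // A fake used in tests just returns canned data; the production
        // implementation would wrap the real ExternalDb1 context.
        class FakeEmployeeSource : public IEmployeeSource {
        public:
            std::vector<Employee> GetEmployees() override {
                return { {"Alice", true}, {"Bob", false} };
            }
        };

        // The "only Active employees" rule lives with the consumer, not with the
        // data project, and works against whichever source is injected.
        std::vector<Employee> ActiveEmployees(IEmployeeSource& source) {
            std::vector<Employee> result;
            for (const auto& e : source.GetEmployees())
                if (e.active) result.push_back(e);
            return result;
        }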

    Read the article

  • How do you know when to change jobs? [closed]

    - by dustyprogrammer
    Possible Duplicate: When do you know it's time to move on from your current job? I have been working for a couple of years now. I just want to know what people think about leaving one company for another, or about starting to look around for other positions. I tend to use other people's resumes as a guideline for when to change companies, and I am approaching the point in my life where most of the people I look to moved on from their first position to pursue others. I know it isn't good to base my decisions on what others do; I was just wondering when it is time to move companies. I am currently happy in my position and I am learning tons. It's just something I have been seeing a lot, and I would like to get a feel for what people think. Thanks.

    Read the article

  • Issues With IIS Hosting Two Domains From Same Folder [closed]

    - by Bob Mc
    I have two different domain names that resolve to the same ASP.Net site. Both domains are hosted on the same server, which runs Windows Server 2003 and IIS6. The sites are differentiated in IIS Manager using host headers. However, both of the sites point to the same folder on the local drive for the site's page files. I am occasionally experiencing an ASP.Net error that says "The state information is invalid for this page and might be corrupted." I'm the site developer so I've addressed all the relevant code-related causes for this issue. However, I was wondering whether having two domains/sites sharing the same folder for an ASP.Net application might be causing this intermittent error. Also, is this generally a bad practice? Should I make separate, duplicate folders for each of the domains? Seems like that can become a maintenance headache.

    Read the article

  • Proper library for enums

    - by Bobson
    I'm trying to refactor some code such that the display is separate from the implementation, and I'm not sure where to put the existing enums. My project is currently structured as follows: Utilities RemoteData (Depends on: Utilities) LocalData (Depends on: RemoteData, Utilities) RemoteWeb (Depends on: RemoteData, Utilities) LocalWeb (Depends on: RemoteData, LocalData, Utilities) I'm now trying to add "ViewLibrary (Depends on: Utilities)" to this list, and then adding it as a new dependency to both RemoteWeb and LocalWeb. It will contain a set of interfaces which the other two projects will implement, use to populate the view, and then consume the result. There's an enum which is currently used in all the projects except Utilities. It thus lives in the RemoteData project, because everything else depends on it. But this new ViewLibrary won't depend on either data project. So how will it know about this enum? Some options I see: Create a new project just for shared enum values. Add it to Utilities, even though it is related to data. Define it a second time in ViewLibrary, and require both RemoteWeb and LocalWeb to convert the one type into the other when they access the shared views. Add a dependency on RemoteData to the ViewLibrary, even though it's supposed to be independent of data-source. Are there any better options? Is this structure flawed to begin with?
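
    A common resolution is the first option above: give the enum its own tiny, dependency-free component that the data projects and ViewLibrary can all reference, so no second definition or conversion layer is needed. A minimal sketch, with a hypothetical SharedTypes component and enum name purely for illustration:

        // SharedTypes/RecordKind.h -- hypothetical standalone component with no
        // dependency on any data project, so RemoteData, LocalData and ViewLibrary
        // can all include it without creating a cycle.
        #pragma once

        enum class RecordKind {
            Remote,
            Local,
            Archived
        };

    The data projects and ViewLibrary would then each take a dependency on SharedTypes (or on Utilities, if keeping the project count down matters more than keeping Utilities data-free), and the duplicate-definition and conversion options fall away.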

    Read the article

  • Polipo dpkg failure problem [closed]

    - by ICXC
    Possible Duplicate: polipo E: Sub-process /usr/bin/dpkg returned an error code (1)
    This is the error I get each time I try to install polipo with the command apt-get install polipo or when I try to install it from Ubuntu software center:

        Starting polipo: Couldn't open config file /etc/polipo/config: 2.
        invoke-rc.d: initscript polipo, action "start" failed.
        dpkg: error processing polipo (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         polipo
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
        Setting up polipo (1.0.4.1-1.1) ...
        Starting polipo: Couldn't open config file /etc/polipo/config: 2.
        invoke-rc.d: initscript polipo, action "start" failed.
        dpkg: error processing polipo (--configure):
         subprocess installed post-installation script returned error exit status 1

    How can I solve this?

    Read the article

  • Is there a White / Blank Canvas E-Commerce Platform to Integrate into Existing Site? [closed]

    - by beta208
    Possible Duplicate: Which Ecommerce Script Should I Use? Our website is already built, and we're interested in adding a store to it. Essentially, there is a global header and a global footer, and in between is a white expandable div. We'd like our store to fit between the header and footer (and preferably be 960px wide). Do you know of any store platform built to live between the header and footer in situations like this? We really want a full store, not just PayPal buy buttons. We'd like it to have a shop backend (CMS-like) with full tracking, etc. It can be paid or unpaid, and preferably hosted by us, but either might be applicable (if an iframe or alternative works securely?). It would need to feature over 100 items. If authorize.net is supported, that is a plus.

    Read the article

  • Override a static library's global function

    - by Jason Madux
    Not sure if this question belongs here or on SO, but posting here for now :) I'm trying to override a global function defined in "StaticLib" for my iOS application but the compiler keeps giving me the duplicate symbol error. Relevant code:

        #import "MyApplication.h"
        #import "StaticLib.h"

        NSData* getAllData() {
            NSLog(@"myGetAllData");
            return nil;
        }

        @implementation MyApplication
        ...
        @end

    I've looked into Apple's runtime library but that all seems to be class-oriented. Any suggestions or is it just not possible to override global functions from static libraries?

    Read the article

  • How can I explain object-oriented programming to someone who's only coded in Fortran 77? [closed]

    - by Zonedabone
    Possible Duplicate: How can I explain object-oriented programming to someone who's only coded in Fortran 77? My mother did her college thesis in Fortran, and now (over a decade later) needs to learn C++ for fluid simulations. She is able to understand all of the procedural programming, but no matter how hard I try to explain objects to her, it doesn't stick. (I do a lot of work with Java, so I know how objects work.) I think I might be explaining it in too high-level a way, so it isn't really making sense to someone who has never worked with objects at all and grew up in the age of purely procedural programming. Is there any simple way I can explain them to her that will help her understand? Thanks in advance for the help.
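
    Since she is moving from Fortran-style procedural code to C++, one concrete way to bridge the gap is to show the same thing twice: first as "data plus free functions", then as a class that bundles the two, so an object reads as "a record that carries the routines that act on it". A minimal sketch (the Particle example is just an illustration, not from the question):

        #include <iostream>

        // Fortran/procedural style: the data and the routine that acts on it are separate.
        struct ParticleData {
            double x;
            double vx;
        };

        void advance(ParticleData& p, double dt) { p.x += p.vx * dt; }

        // Object-oriented style: the same data, but the operation travels with it.
        class Particle {
        public:
            Particle(double x, double vx) : x_(x), vx_(vx) {}
            void advance(double dt) { x_ += vx_ * dt; }
            double position() const { return x_; }
        private:
            double x_;
            double vx_;
        };

        int main() {
            ParticleData pd{0.0, 2.0};
            advance(pd, 0.5);     // procedural call: advance(data, ...)

            Particle p(0.0, 2.0);
            p.advance(0.5);       // OO call: data.advance(...)

            std::cout << pd.x << " " << p.position() << "\n";  // both print 1
        }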

    Read the article

  • ecommerce options for 5-6 products [closed]

    - by user5252
    Possible Duplicate: Which Ecommerce Script Should I Use? We're looking to develop a simple e-commerce solution to sell 5-6 products. We'd rather not have to use PayPal's buttons (buy it now!) if there's an existing alternative, but due to budget/time constraints we also don't want to roll our own. Are there any small, basic e-commerce solutions available that would allow this? I did look at FoxyCart, but the monthly fee was a bit of a turn-off. (I must sound extremely fussy, I'm aware!) Something like Zen would just be overkill for our needs. Thanks for any suggestions.

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, have some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache). Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing. It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read cache entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
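
    The closing recommendation (write to the database, then invalidate the cached entry so the next read-through repopulates it) is language-agnostic; Coherence itself is a Java product, but the ordering can be sketched with hypothetical Database and ReadThroughCache classes like the ones below, which are illustrations and not the Coherence API:

        #include <iostream>
        #include <map>
        #include <string>

        // Hypothetical stand-ins for the real database and cache clients.
        class Database {
        public:
            void update(const std::string& key, const std::string& value) { rows_[key] = value; }
            std::string load(const std::string& key) const { return rows_.at(key); }
        private:
            std::map<std::string, std::string> rows_;
        };

        class ReadThroughCache {
        public:
            explicit ReadThroughCache(Database& db) : db_(db) {}
            void evict(const std::string& key) { entries_.erase(key); }  // invalidate one entry
            const std::string& get(const std::string& key) {             // read-through on miss
                auto it = entries_.find(key);
                if (it == entries_.end())
                    it = entries_.emplace(key, db_.load(key)).first;
                return it->second;
            }
        private:
            Database& db_;
            std::map<std::string, std::string> entries_;
        };

        // Write path from the end of the post: commit to the system of record first,
        // then invalidate, so the next read-through repopulates the entry and readers
        // never see a value older than the committed row.
        void updateValue(Database& db, ReadThroughCache& cache,
                         const std::string& key, const std::string& value) {
            db.update(key, value);
            cache.evict(key);
        }

        int main() {
            Database db;
            db.update("part:42", "v1");
            ReadThroughCache cache(db);
            std::cout << cache.get("part:42") << "\n";   // v1, loaded through the cache
            updateValue(db, cache, "part:42", "v2");
            std::cout << cache.get("part:42") << "\n";   // v2, re-read after invalidation
        }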

    Read the article

  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this:

        // Base struct shared by all subtypes
        // Plain-old data; can't use virtual functions
        struct POD {
            int kind;
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
        };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_Derived3 };

        struct Derived1: POD {
            Derived1() { kind = Kind_Derived1; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived2: POD {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived3: POD {
            Derived3() { kind = Kind_Derived3; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

    The POD class's function members are then implemented like this:

        int POD::GetFoo() {
            // Call the kind-specific function
            switch (kind) {
            case Kind_Derived1: {
                Derived1 *pDerived1 = static_cast<Derived1*>(this);
                return pDerived1->GetFoo();
            }
            case Kind_Derived2: {
                Derived2 *pDerived2 = static_cast<Derived2*>(this);
                return pDerived2->GetFoo();
            }
            case Kind_Derived3: {
                Derived3 *pDerived3 = static_cast<Derived3*>(this);
                return pDerived3->GetFoo();
            }
            default:
                throw UnknownKindException(kind, "GetFoo");
            }
        }

    POD::GetBar(), POD::GetBaz(), POD::GetXyzzy(), and other members are implemented similarly. This example is simplified: the actual code has about a dozen different subtypes of POD and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, and every time we do that we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. There is a lot of code that depends on these structs being plain old data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to. So I'm looking for suggestions on the best way to clean this up so that all the knowledge of how to call the subtype methods is centralized in one place, rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I could create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy, but before I start down that path I'd like to know how others have dealt with this.
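
    One way to centralize this without virtual functions is a static table of function pointers indexed by kind: each POD member function becomes a one-line table lookup, and adding a subtype means adding a table row instead of editing a switch in every method. The sketch below is a simplified illustration of that idea, not the original code (the error handling and the set of types are cut down):

        #include <cstdio>
        #include <stdexcept>

        struct POD {
            int kind;
            int GetFoo();   // dispatching stub, defined below
        };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_NumKinds };

        struct Derived1 : POD { int GetFoo() { return 1; } };
        struct Derived2 : POD { int GetFoo() { return 2; } };

        // One tiny template generates the "downcast and forward" thunk for each type,
        // so the per-method boilerplate lives in exactly one place.
        template <typename T>
        int CallGetFoo(POD* p) { return static_cast<T*>(p)->GetFoo(); }

        // The dispatch table: one row per kind. Adding a subtype means adding a row
        // here instead of editing a switch in every member function.
        typedef int (*GetFooFn)(POD*);
        static const GetFooFn kGetFooTable[Kind_NumKinds] = {
            &CallGetFoo<Derived1>,   // Kind_Derived1
            &CallGetFoo<Derived2>,   // Kind_Derived2
        };

        int POD::GetFoo() {
            if (kind < 0 || kind >= Kind_NumKinds)
                throw std::out_of_range("unknown kind in GetFoo");
            return kGetFooTable[kind](this);
        }

        int main() {
            Derived2 d;
            d.kind = Kind_Derived2;            // in the real code the ctor sets this
            POD* p = &d;
            std::printf("%d\n", p->GetFoo());  // prints 2 via the table
        }

    Because the table lives in ordinary process memory and only the kind field lives in the shared-memory struct, the objects stay plain old data; a small template or macro can generate one such table per method (GetBar, GetBaz, ...) to keep the remaining boilerplate in one place.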

    Read the article

  • Software center crashing in 12.04

    - by Jordan
    I'm having issues with the Software Center. I'm trying to install Google Chrome, but I can't because the Software Center crashes as soon as it opens. I've looked around at similar topics and found plenty, but none of the suggested solutions have worked, so I wonder if this is different. When I type 'software-center' into the terminal, I get the error that others have had:

        softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file

    I know this might seem like a duplicate, but none of those solutions worked. Sorry, kind of new to Ubuntu. I use Ubuntu 12.04 LTS.

    Read the article

  • Emulate Historical Figures i.e. Einstein - Is this possible using linguistic logic for my http://www.ustimeline.com Education System

    - by Johnnylight
    After hearing about the success of IBM's Watson, I started thinking: perhaps emulating human language is now possible? My goal is to create virtual historical characters to represent the main figures in my Adventur-Cation "The Great American Adventure" program, such as Einstein or Crazy Horse. The goal is to build an intelligent system capable of indexing the internet and storing the data in a schema informed by modern linguistic theory (phonemes, morphemes, syntax), so that it can return a semantically sound response very similar to the response the same person would give if still alive today. The goal would be to use the same engine/system for all characters. Each character would have their own digital representation and voice, and would organize data differently based on tags/keywords stored about the individual. Imagine a Max Headroom Einstein. Based on the success of Watson, I believe something like this may now be possible. It would be an interesting way to study history and would be a vehicle for entertainment as well. Can anyone confirm whether this has already been attempted? Is anyone interested in exploring this using cognitive science, psychology, artificial intelligence, historical data captured on the internet, and linguistic theory?

    Read the article

  • Can MySQL reasonably perform queries on billions of rows?

    - by haxney
    I am planning on storing scans from a mass spectrometer in a MySQL database and would like to know whether storing and analyzing this amount of data is remotely feasible. I know performance varies wildly depending on the environment, but I'm looking for the rough order of magnitude: will queries take 5 days or 5 milliseconds?

    Input format: Each input file contains a single run of the spectrometer; each run is comprised of a set of scans, and each scan has an ordered array of datapoints. There is a bit of metadata, but the majority of the file is comprised of arrays of 32- or 64-bit ints or floats.

    Host system:

        | OS             | Windows 2008 64-bit           |
        | MySQL version  | 5.5.24 (x86_64)               |
        | CPU            | 2x Xeon E5420 (8 cores total) |
        | RAM            | 8GB                           |
        | SSD filesystem | 500 GiB                       |
        | HDD RAID       | 12 TiB                        |

    There are some other services running on the server using negligible processor time.

    File statistics:

        | number of files  | ~16,000      |
        | total size       | 1.3 TiB      |
        | min size         | 0 bytes      |
        | max size         | 12 GiB       |
        | mean             | 800 MiB      |
        | median           | 500 MiB      |
        | total datapoints | ~200 billion |

    The total number of datapoints is a very rough estimate.

    Proposed schema: I'm planning on doing things "right" (i.e. normalizing the data like crazy), and so would have a runs table, a spectra table with a foreign key to runs, and a datapoints table with a foreign key to spectra.

    The 200 billion datapoint question: I am going to be analyzing across multiple spectra and possibly even multiple runs, resulting in queries which could touch millions of rows. Assuming I index everything properly (which is a topic for another question) and am not trying to shuffle hundreds of MiB across the network, is it remotely plausible for MySQL to handle this?

    UPDATE: additional info. The scan data will be coming from files in the XML-based mzML format. The meat of this format is in the <binaryDataArrayList> elements where the data is stored. Each scan produces >= 2 <binaryDataArray> elements which, taken together, form a 2-dimensional (or more) array of the form [[123.456, 234.567, ...], ...]. These data are write-once, so update performance and transaction safety are not concerns. My naïve plan for a database schema is:

    runs table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | start_time  | TIMESTAMP   |
        | name        | VARCHAR     |

    spectra table:

        | column name    | type        |
        | id             | PRIMARY KEY |
        | name           | VARCHAR     |
        | index          | INT         |
        | spectrum_type  | INT         |
        | representation | INT         |
        | run_id         | FOREIGN KEY |

    datapoints table:

        | column name | type        |
        | id          | PRIMARY KEY |
        | spectrum_id | FOREIGN KEY |
        | mz          | DOUBLE      |
        | num_counts  | DOUBLE      |
        | index       | INT         |

    Is this reasonable?

    Read the article

  • Write file at a specific value and line

    - by user2828891
    I want to write data at a specified value in a text file from a text box. Here is an example: "item_begin etcitem 3344 item_type=etcitem" is the first line and "item_begin weapon 3343 item_type=weapon" is the second. I want to replace item_type=weapon on the second line with item_type=armor. Here is the code so far:

        var data2 = File.WriteAllLines("itemdata.txt")
            .Where(x => x.Contains("3343"))
            .Take(1)
            .SelectMany(x => x.Split('\t'))
            .Select(x => x.Split('='))
            .Where(x => x.Length > 1)
            .ToDictionary(x => x[0].Trim(), x => x[1]);

    But it returns an error at WriteAllLines. Here is the read part of the code:

        var data = File.ReadLines("itemdata.txt")
            .Where(x => x.Contains("3343"))
            .Take(1)
            .SelectMany(x => x.Split('\t'))
            .Select(x => x.Split('='))
            .Where(x => x.Length > 1)
            .ToDictionary(x => x[0].Trim(), x => x[1]);

        // call values
        textitem_type.Text = data["item_type"];

    I want to write back the same value I change in textitem_type.Text after reading. I used the following to replace, but it replaces all values with the same name and leaves me with only one line in the file:

        private void button2_Click(object sender, EventArgs e)
        {
            var data = File
                .ReadLines("itemdata.txt")
                .Where(x => x.Contains(itemSrchtxt.Text))
                .Take(1)
                .SelectMany(x => x.Split('\t'))
                .Select(x => x.Split('='))
                .Where(x => x.Length > 1)
                .ToDictionary(x => x[0].Trim(), x => x[1]);

            StreamReader reader = new StreamReader(Directory.GetCurrentDirectory() + @"\itemdata.txt");
            string content = reader.ReadLine();
            reader.Close();

            content = Regex.Replace(content, data["item_type"], textitem_type.Text);

            StreamWriter write = new StreamWriter(Directory.GetCurrentDirectory() + @"\itemdata.txt");
            write.WriteLine(content);
            write.Close();
        }
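
    For what it's worth, the pattern the question is reaching for is a plain read-modify-write: load every line, change only the matching one, and write all the lines back (File.WriteAllLines expects the full contents to write, which is why calling it with only a path fails). A language-neutral sketch of that pattern in C++, reusing the itemdata.txt name and the "3343" marker from the question; everything else is illustrative:

        #include <fstream>
        #include <string>
        #include <vector>

        int main() {
            // 1. Read every line of the file into memory.
            std::vector<std::string> lines;
            {
                std::ifstream in("itemdata.txt");
                std::string line;
                while (std::getline(in, line)) lines.push_back(line);
            }

            // 2. Modify only the first line that contains the wanted id ("3343"),
            //    replacing its item_type value with the new one.
            const std::string oldField = "item_type=weapon";
            const std::string newField = "item_type=armor";
            for (std::string& line : lines) {
                if (line.find("3343") == std::string::npos) continue;
                std::size_t pos = line.find(oldField);
                if (pos != std::string::npos)
                    line.replace(pos, oldField.size(), newField);
                break;  // only the first matching line, like .Take(1) in the question
            }

            // 3. Write the full set of lines back out, preserving the others.
            std::ofstream out("itemdata.txt", std::ios::trunc);
            for (const std::string& line : lines) out << line << '\n';
        }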

    Read the article

  • .NET datetime issue with SQL stored procedure

    - by DanO
    I am getting the error below when executing my application on a Windows XP machine with .NET 2.0 installed. On my computer (Windows 7, .NET 2.0 - 3.5) I am not having any issues. The target SQL Server version is 2005. This error started occurring when I added the datetime to the stored procedure. I have been reading a lot about using .NET datetime with SQL datetime and I still have not figured this out. If someone can point me in the right direction I would appreciate it. Here is where I believe the error is coming from:

        private static void InsertRecon(string computerName, int EncryptState, TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten)
        {
            SqlConnection DBC = new SqlConnection("server=server;UID=InventoryServer;Password=pass;database=Inventory;connection timeout=30");
            SqlCommand CMD = new SqlCommand();
            try
            {
                CMD.Connection = DBC;
                CMD.CommandType = CommandType.StoredProcedure;
                CMD.CommandText = "InsertReconData";

                CMD.Parameters.Add("@CNAME", SqlDbType.NVarChar);
                CMD.Parameters.Add("@ENCRYPTEXIST", SqlDbType.Int);
                CMD.Parameters.Add("@RUNTIME", SqlDbType.Time);
                CMD.Parameters.Add("@ENCRYPTSIZE", SqlDbType.BigInt);
                CMD.Parameters.Add("@TIMEWRITTEN", SqlDbType.DateTime);

                CMD.Parameters["@CNAME"].Value = computerName;
                CMD.Parameters["@ENCRYPTEXIST"].Value = EncryptState;
                CMD.Parameters["@RUNTIME"].Value = FindTime;
                CMD.Parameters["@ENCRYPTSIZE"].Value = EncryptSize;
                CMD.Parameters["@TIMEWRITTEN"].Value = timeWritten;

                DBC.Open();
                CMD.ExecuteNonQuery();
            }
            catch (System.Data.SqlClient.SqlException e)
            {
                PostMessage(e.Message);
            }
            finally
            {
                DBC.Close();
                CMD.Dispose();
                DBC.Dispose();
            }
        }

    And the exception:

        Unhandled Exception: System.ArgumentOutOfRangeException: The SqlDbType enumeration value, 32, is invalid.
        Parameter name: SqlDbType
           at System.Data.SqlClient.MetaType.GetMetaTypeFromSqlDbType(SqlDbType target)
           at System.Data.SqlClient.SqlParameter.set_SqlDbType(SqlDbType value)
           at System.Data.SqlClient.SqlParameter..ctor(String parameterName, SqlDbType dbType)
           at System.Data.SqlClient.SqlParameterCollection.Add(String parameterName, SqlDbType sqlDbType)
           at ReconHelper.getFilesInfo.InsertRecon(String computerName, Int32 EncryptState, TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten)
           at ReconHelper.getFilesInfo.Main(String[] args)

    Read the article

  • Ubuntu 12.04 can't find the root partition (it doesn't look for btrfs partitions) and ends up with a kernel panic [closed]

    - by zalesz
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do? I'm running Ubuntu 12.04 with kernel 3.2.0-17 and all partitions formatted as btrfs. Everything was OK until kernel 3.2.0-18/19. Now the system doesn't load; after trying to run it in recovery mode there is a message that a kernel panic occurred because there is no partition with ext3/4 (it lists some other partition types too), but I don't see any btrfs-like type among them. Any ideas how to fix it? Best

    Read the article

  • Symfony2 Forms: is it possible to bind a form in an "unconventional way"?

    - by DonCallisto
    Imagine this scenario: in our company there is an employee who "plays" around with graphics, CSS, HTML and so on. Our new project will be born under Symfony2, so we're trying some silly - but "real" - stuff (like authentication from the db, submitting data from a form and persisting it to the db, and so on).

    The problem: As far as I know, having learnt from the Symfony2 "book" that I found on the site (you can find it here), there is an "automated" way of creating and rendering forms:

    1) Build the form up in a controller like this:

        $form = $this->createFormBuilder($task)
            ->add('task', 'text')
            ->add('dueDate', 'date')
            ->getForm();
        return $this->render('pathToBundle:Controller:templateTwig',
            array('form' => $form->createView()));

    2) In templateTwig, render the template:

        {{ form_widget(form) }} {# or the single-rows method #}

    3) In a controller (the same one that has a route where you can submit data), take back the submitted information:

        if ($request->getMethod() == 'POST') {
            $form->bindRequest($request);
            /* and so on */
        }

    Back to the scenario: our graphics employee doesn't want to go into controllers, write PHP and other stuff like that. So he'll write a Twig template with an "unconventional" (from Symfony2's point of view, but conventional from HTML's point of view) method:

        {# in the Twig template #}
        <form action="{{ path('SestanteUserBundle_homepage') }}" method="post" name="userForm">
            <div>
                USERNAME: <input type="text" name="user_name" value="{{ user.username }}"/>
            </div>
            <div>
                EMAIL: <input type="text" name="user_mail" value="{{ user.email }}"/>
            </div>
            <input type="hidden" name="user_id" value="{{ id }}" />
            <input type="submit" value="update the data">
        </form>

    Now, if in the controller that handles the submission of data we do something like this:

        public function indexAction(Request $request)
        {
            if ($request->getMethod() == 'POST') {
                // we arrived here via a submit, so we have to modify the data before showing it
                $defaultData = array('message' => 'I saw this in an example, but I do not understand whether I can do without it');
                $form = $this->createFormBuilder($defaultData)
                    ->add('user_name', 'text')
                    ->add('user_mail', 'email')
                    ->add('user_id', 'integer')
                    ->getForm();
                $form->bindRequest($request); // bind the form to the request
                $data = $form->getData();     // I expect a key => value array
                /* .... */

    we expected that $data would contain an array of key/value pairs from the submitted form. We found that it doesn't. After googling for a while and trying other "bad" ideas, we're stuck. So, if you have a "graphics office" that can't work directly with PHP code, how can we interface the form(s) with the controller(s)?

    UPDATE: It seems that Symfony2 uses a different convention for form field names and for looking them up once you've submitted. In particular, if my form's name is addUser and a field is named userName, the field's name will be AddUser[username], so maybe it has a "dynamic" lookup method that extracts the form's name and the field's name, concatenates them, and looks up the values. Is that what is happening?

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free, adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath that can optionally use SQL data lookups. This can provide warnings as well as error conditions, tailoring your validation layer to exactly meet your business application needs. Also covered is developing test suites using Apache Ant scripting of validations, which allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach. SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of the interactive generation of live XML data from a SQL data store, followed by validation processing complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integration into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron-based interpretative runtime tools and provides a simpler declarative metaphor for rule definition. See: http://www.youtube.com/user/TheCAMeditor

    Read the article

  • How to create a temporary staging server on my home machine [closed]

    - by Homunculus Reticulli
    Possible Duplicate: What things required to host a website at home I want to create a temporary staging server which can be accessed (via a browser) by other people that I want to show the website to (a business partner who is halfway across the world). IIRC, my ISP issues dynamic addresses, so I may need to register with a (DNS server?) - not sure about this. Although I'm a software developer, I don't know much about the hardware side of things and would appreciate help in getting set up so I can show a website to a business partner. Here are the relevant details:
    Web server: Apache 2.2
    OS: Ubuntu 10.04 LTS
    modem/router: ZyXel P-600

    Read the article

  • How does thumbnail preview in Ubuntu differ from that of Windows? [closed]

    - by Forbidden Overseer
    Possible Duplicate: How does Ubuntu know what file type a file without extension is I thought this question might get a better response on Ask Ubuntu, as at a glance it seems to have more to do with Ubuntu than Windows. Let's say I have a foo.mkv file. Thumbnail previews work in both Windows 7 and Ubuntu. When I change the filename to anything random, like foo.bar, or when I remove the extension itself (making it just foo), Nautilus still shows thumbnails normally, as if it can recognize what type of file it is without looking at the file extension. This, however, doesn't happen in Windows 7: Windows starts asking me which application I want to use to open the file as soon as I remove the extension (forget thumbnails...). So, how does this thumbnail preview work in Windows 7 and Ubuntu? What makes Ubuntu recognize files "out of the box", unlike Windows 7?
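
    The short answer is that Windows picks thumbnailers and handlers from the file extension via the registry, while Nautilus (through GIO and the shared-mime-info database) also sniffs the file's contents against known "magic" byte patterns, so an extensionless Matroska file is still detected as video and thumbnailed. The same idea can be demonstrated with libmagic, the library behind the file(1) command; a small sketch in C++ (assumes the libmagic development headers are installed, link with -lmagic):

        #include <iostream>
        #include <magic.h>

        int main(int argc, char** argv) {
            if (argc < 2) {
                std::cerr << "usage: sniff <file>\n";
                return 1;
            }

            // Open a libmagic handle that reports MIME types rather than descriptions.
            magic_t cookie = magic_open(MAGIC_MIME_TYPE);
            if (cookie == nullptr) return 1;

            // Load the default database of magic byte signatures.
            if (magic_load(cookie, nullptr) != 0) {
                std::cerr << "magic_load: " << magic_error(cookie) << "\n";
                magic_close(cookie);
                return 1;
            }

            // Content sniffing: the result is based on the bytes in the file,
            // so "foo" with Matroska contents still reports video/x-matroska.
            const char* mime = magic_file(cookie, argv[1]);
            std::cout << argv[1] << ": " << (mime ? mime : "unknown") << "\n";

            magic_close(cookie);
            return 0;
        }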

    Read the article

  • How to remove the username from the "Me" menu (right near the power button on the top panel)? [closed]

    - by Ivan
    Possible Duplicate: How do I replace the MeMenu username with my actual name? My user name takes up roughly as much space as 6 icons would. I don't need to see my username; I am the only user of my computer. How can I remove it? I use Ubuntu 10.10. UPDATE: The answer found: gconftool -s /system/indicator/me/display --type int 0 UPDATE: Unfortunately the solution doesn't work any more. Now it removes the instant messaging menu entirely (including the icon) instead of just removing the name.

    Read the article

  • Why not write all tests at once when doing TDD? [closed]

    - by RichK
    Possible Duplicate: Why not write all tests at once when doing TDD? The Red - Green - Refactor cycle for TDD is well established and accepted. We write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go? The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, and code coverage should be just as high, so what's the harm? Sometimes it's easier to write all the tests first as a form of 'brain dump', to quickly write down all the expected behavior in one go.

    Read the article

  • Do you enjoy 'Unit testing' ? [closed]

    - by jibin
    Possible Duplicate: How have you made unit testing more enjoyable? I mean, we are all developers and we love coding. I love learning new stuff (languages, frameworks, even new domains like mobile/tablet development). But testing? As a newbie to the corporate environment, I just can't digest it. (We follow a 'write, then manually test' pattern.) Is that even unit testing? Usually a single developer handles a module (from design to code to unit testing), so is that practical? Can somebody tell me how to make unit testing fun, or just how to do it properly? Do we try all possibilities manually - say, a unit test for a web page with a lot of JavaScript validations? PS: the projects are all web applications.

    Read the article
