Search Results

Search found 19528 results on 782 pages for 'form factor'.

  • Upgrading from PHP 5.3 to PHP 5.4 with MacPorts

    - by dr.stonyhills
    PHP 5.4 has been available for some time now, and MacPorts recently caught up with the release of port php54, but the process of upgrading is not as clear as it could be. Even worse for those who are new to maintaining multiple versions of PHP on the same machine. I am keen on trying out some of the new features in PHP 5.4, like traits and the new array syntax, while falling back on PHP 5.3 for other compatibility stuff. So I sudo port install php54 (all the variants, apache2, etc.). Then I tell it what PHP port to use as default: sudo port select --set php php54. Checking what version of PHP is active in the terminal with php -v outputs PHP 5.4.3. But I seem to be having issues choosing the right non-CLI version: the version of the module run by Apache is still PHP 5.3.12. Do I have to change the reference to libphp5 in Apache's httpd.conf? Any advice on the right workflow for switching between PHP versions on MacPorts greatly appreciated!
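
    Which PHP the Apache module runs is controlled by a LoadModule line in httpd.conf, independent of what the CLI reports, so switching it there is the usual fix. A minimal sketch of the change, assuming MacPorts' default Apache layout (the exact module file names may differ on your install):

        # /opt/local/apache2/conf/httpd.conf
        # comment out the old PHP 5.3 module...
        # LoadModule php5_module modules/libphp5.so
        # ...and load the PHP 5.4 module instead (check your modules directory for the real name)
        LoadModule php5_module modules/mod_php54.so

    Then restart Apache (sudo /opt/local/apache2/bin/apachectl restart) and confirm the module version with phpinfo().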

  • Using HTML5 Today part 4 – What happened to XHTML?

    - by Steve Albers
    This is the fourth entry in a series of descriptions & demos from the "Using HTML5 Today" user group presentation.

    For practical purposes, the original XHTML standard is a historical footnote, although XHTML Transitional will probably live on forever in the default web page templates of old web page editors. The original XHTML spec was released in 2000, on the heels of the HTML 4.01 spec. The plan was to move web development away from HTML to the more formal, rigorous approach that XHTML offered, but it was built on a principle that conflicts with the history and culture of the Internet: XHTML introduced the idea of Draconian Error Handling, which essentially means that invalid XML markup on a page will cause the page to stop rendering. There is a transitional mode offered in the original XHTML spec, but the goal was to move to D.E.H. You can see the result by changing the MIME type for a document to "application/xhtml+xml"; for my class example we change this setting in the web.config file:

        <staticContent>
          <remove fileExtension=".html" />
          <mimeMap fileExtension=".html" mimeType="application/xhtml+xml" />
        </staticContent>

    With the new strict syntax a simple error, in this case a duplicate </td> tag, can cause a critical page error.

    While XHTML became very popular in the ensuing decade, the Strict form of XHTML never achieved widespread use. Draconian Error Handling was one of the factors that led in time to the creation of the WHATWG, or Web Hypertext Application Technology Working Group. The WHATWG contributed to the eventual disbanding of the XHTML 2.0 working group and the W3C's move to embrace the HTML5 standard. For developers who long for XML markup, the W3C HTML5 standard includes an XHTML5 syntax.

    For the longer, more definitive look at what happened to XHTML and how HTML5 came to be, check out the Dive Into HTML mirror site or Bruce Lawson's "HTML5: Who, What, When Why" talk.
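
    For reference, here is a minimal sketch (my illustration, not from the original post) of a document that satisfies the strict XML parsing described above; served as application/xhtml+xml, a single unclosed or duplicated tag in a page like this stops rendering entirely:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head><title>XHTML5 sketch</title></head>
          <body>
            <p>Every element here must be well-formed XML.</p>
          </body>
        </html>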

  • Internet Explorer 9 is coming Monday to a web near you

    - by brian_ritchie
    Internet Explorer 9 is finally here... well, almost. Microsoft is releasing their new browser on March 14, 2011. IE9 has a number of improvements, including:

    - Faster, Faster, Faster. Did I mention it is faster? With the new browsers coming out from Mozilla, Google, and Microsoft, there has been a flood of speed test coverage. Chrome has long held the JavaScript speed crown. But according to Steven J. Vaughan-Nichols over at ZDNet... "for the moment at least IE9 is actually the fastest browser I've tested to date." He came to this revelation after figuring out that the 32-bit version of IE9 has the new Chakra JIT (the 64-bit version doesn't). It also has a DirectX-based rendering engine, so it can do cool tricks once reserved for desktop applications.
    - Windows 7 Desktop Integration. Read my post for more details. Unfortunately, they didn't integrate my ideas... at least not yet :)
    - Hot new UI. OK, they "borrowed" some ideas from Chrome... but that is the best form of flattery.
    - Standards Compliance. A real focus on HTML5 and CSS3. Definite goodness for developers.

    So, go get yourself some IE9 on Monday and enjoy!

  • WCF - Automatically create ServiceHost for multiple services

    - by Rajesh Pillai
    Welcome back, readers! This blog post is about a small tip that may make working with WCF service hosts a bit easier if you have lots of services and you need to quickly host them for testing. Recently I encountered a situation where we had to create multiple service hosts quickly for testing. Here is the code snippet, which is pretty self-explanatory. You can put this code in your service host, which in this case is a console application.

        class Program
        {
            static void Main(string[] args)
            {
                // Stores all hosts
                List<ServiceHost> hosts = new List<ServiceHost>();
                try
                {
                    // Get the services element from the serviceModel element in the config file
                    var section = ConfigurationManager.GetSection("system.serviceModel/services") as ServicesSection;
                    if (section != null)
                    {
                        foreach (ServiceElement element in section.Services)
                        {
                            // NOTE: If the type lives in another assembly, element.Name must be an
                            // assembly-qualified name of the form "TypeName, AssemblyName",
                            // e.g. "Business.Services.CustomerService, Business.Services"
                            var serviceType = Type.GetType(element.Name); // Resolve the service type by name
                            var host = new ServiceHost(serviceType);
                            hosts.Add(host); // Add to the host collection
                            host.Open();     // Open the host
                        }
                    }
                    Console.ReadLine();
                }
                catch (Exception e)
                {
                    Console.WriteLine(e.Message);
                    Console.ReadLine();
                }
                finally
                {
                    foreach (ServiceHost host in hosts)
                    {
                        if (host.State == CommunicationState.Opened)
                        {
                            host.Close();
                        }
                        else
                        {
                            host.Abort();
                        }
                    }
                }
            }
        }

    I hope you find this useful. You can make this a Windows service if required.
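
    The ServicesSection that the code reads is the standard system.serviceModel/services block in the host's App.config. A minimal sketch of one entry (service name, contract, and address here are hypothetical; note the assembly-qualified name so that Type.GetType can resolve the type, per the comment in the code above):

        <system.serviceModel>
          <services>
            <service name="Business.Services.CustomerService, Business.Services">
              <endpoint address="net.pipe://localhost/customers"
                        binding="netNamedPipeBinding"
                        contract="Business.Services.ICustomerService" />
            </service>
          </services>
        </system.serviceModel>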

  • Managing products on an ecommerce site [closed]

    - by John
    I've had a site that sells widgets for many years. I do not inventory my widgets, but the cost of adding them to the site and making sure the site is current is becoming cost prohibitive. Here are the facts:

    - I sell a single class of widget.
    - I have about 50,000 widgets on my site.
    - I have about 100 vendors that create and dropship the products when they get an order from me via email.
    - Each vendor carries from 50 to 5,000 types of widgets.
    - Vendors all have websites with images and descriptions of their products.
    - Each widget is produced in limited supply and usually sells out in 1-5 years.
    - Prices of the widgets often go up, sometimes more than 50%, before they sell out.
    - My vendors aren't very tech sophisticated. They have websites with their products, but most can't supply an API or database dump. Their websites usually display retail prices to the public, but I log in or refer to a price list (usually Excel) for wholesale prices.

    As it stands now, I hire local people to add and describe each widget on our website. It usually takes a person 4 minutes to add a widget to the site. This doesn't include moving to a new vendor. I feel like the upload/edit process is as good as it can get via a form/website. The problem is that it is getting very expensive to upload and keep the widget inventory current. I often get orders for something after it's sold out at the vendor, or the price is wrong. This seems like it would be a problem in many industries. Can anyone suggest the cheapest way to upload inventory and ensure prices are current from my vendors? I'm assuming it will involve outsourcing, but I would like ideas on how to set up the compensation model.

  • what constitutes out-of-band access to a server?

    - by broiyan
    The first time I access my server with a new installation of FileZilla or PuTTY, I get prompted that I should continue only if the RSA key shown to me is correct. The cloud provider has advice on their website that I ought to use their AJAX console to get the key out-of-band and compare it with the one shown by FileZilla. The AJAX console is launched from a link on the cloud provider's website, which requires a login. Exactly how is this AJAX console considered to be out-of-band when it obviously is not a form of physical access to the server?
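
    For comparison purposes, the usual check (a sketch; the key path assumes a stock OpenSSH layout on the server) is to print the host key fingerprint from inside the console session and match it against what the SSH client displays:

        # run inside the provider's console session, not over SSH
        ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub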

  • Intel SASWT4I SAS/SATA Controller Question

    - by Joe Hopfgartner
    Hey there! I want to assemble a cheap storage system based on the Norco RPC-4020 case. When searching for controllers I found this one: Intel® RAID Controller SASWT4I. This is a quote from the spec sheet: "Scalability. Supports up to 122 physical devices in SAS mode which is ideal for employing JBODs (Just a Bunch Of Disks) or up to 14 devices in RAID 0, 1, 1E/10E mode through direct connect device attachment or through expander backplane support." Does that mean I can attach 14 SATA drives directly to the controller using SFF-8087 to 4x SATA breakout cables? That would be nice, because then I can choose a mainboard that has 6 onboard SATA ports and I can connect all 20 bays while only spending $155 on the controller and maybe another $100 on cables. Would that work? And why is it 14 and not 16 when there are 4 ports? I am really confused about all the breakout/fanout/(edge-)expanding/multiplying/channel stuff...

  • How successful is GPL in reaching its goals?

    - by StasM
    There are, broadly, two types of FOSS licenses as they relate to commercial usage of the code; let's call them the GPL type and the BSD type. The first is, broadly, restrictive about commercial usage (by usage I also mean modification and redistribution, as well as creating derived works, etc.) of the code under the license, and the second is much more permissive. As I understand it, the idea behind GPL-type licenses is to encourage people to abandon the proprietary software model and instead convert to FOSS code, and the license is the instrument to entice them to do so; i.e., "you can use this nice software, but only if you agree to come to our camp and play by our rules". What I want to ask is: has this strategy been successful so far? I.e., are there any major achievements in the form of some big project going from closed to open because of the GPL, or some software being developed in the open only because the GPL made it so? How big is the impact of this strategy, compared, say, to a world where everybody used BSD-type licenses or released all open-source code under public domain? Note that I am not asking whether the FOSS model is successful; that is beyond question. What I am asking is whether the specific way of enticing people to convert from proprietary to FOSS used by GPL-type licenses, and not used by BSD-type licenses, has been successful. I am also not asking about the merits of the GPL itself as a license, just about the effectiveness of this strategy.

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    - 15 Lessons from 15 Years as a Software Architect | Ingo Rammer
      In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.
    - Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena
      Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."
    - Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski
      Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked about features in the latest release of Oracle Identity Manager 11g.
    - Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld
      "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."
    - Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi
      Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."

    Thought for the Day
    "The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks
    Source: SoftwareQuotes.com

  • What are the options for retraining formally as a software engineer?

    - by Matt Harrison
    I'm a self-taught programmer. I have a good undergraduate degree in Architecture (building, not software). I was always a science/maths kid and got consistently good grades in these subjects. However, I became indecisive at undergraduate level and switched between Physics, Chemistry, Art and finally stuck with Architecture, mainly out of the desperate need to finish any degree. As soon as I graduated, I ditched architecture and started writing code again professionally. I've been a programmer now for 3 years and I've progressed very quickly. I'm ambitious and I want to work for the top companies in this field at some point, and I've realised I need a Computer Science education to be taken seriously (based on job ads for the big tech firms). I've applied for a few MSc programs in Computer Science but they've all rejected me because of my BA. It's just not an option for me to quit my job and go back and do another 3-year undergraduate degree in CS. I know I can study at this level because I've read most of the books on the reading lists for CS courses in the UK that I can find, and I have this knowledge now; it's just that I can't prove it on an application form. What options are available to me?

  • How to set up Aptana Studio 3 with a Bitbucket private repo

    - by Titus
    I have just started playing around with Git and would like to push a personal project to a newly created, private repo on Bitbucket using Aptana Studio 3. I tried to use the Git integration in Aptana, but I couldn't figure out where to enter my username and password for Bitbucket. I tried using the Team > Share Project context menu, but that keeps throwing the following message:

        Warning: Permanently added the RSA host key for IP address '207.223.240.181' to the list of known hosts.
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    I'm pretty sure that's because my repo is private. However, I couldn't find a provision to provide any form of credentials for linking to a private repo. Any ideas?
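
    As an aside, the "Permission denied (publickey)" line usually points at SSH key setup rather than at the repo being private: pushing over SSH needs a key pair whose public half is registered with the Bitbucket account. A minimal sketch of that setup from a terminal (default paths assumed):

        # generate a key pair if one doesn't exist yet
        ssh-keygen -t rsa
        # show the public key, then add it to the Bitbucket account's SSH keys page
        cat ~/.ssh/id_rsa.pub
        # check that the handshake now succeeds
        ssh -T git@bitbucket.org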

  • Explaining Git to someone new to revision control

    - by MaxMackie
    I've recently decided to jump into the whole world of revision control to work on some open source projects I have. I looked around (Subversion, Mercurial, Git, etc.) and found that Git seemed to make the most sense conceptually to me. I've set everything up on my computer (openSUSE) and made an account on Gitorious (let me know if there is a simpler/better hosting provider). I understand Git from a conceptual point of view (work locally, commit to a local repo, others can now check out from you, right?). But where does Gitorious come into play? I commit to them as well as committing locally? Beyond the concepts, I don't quite understand HOW it works when it comes to making a local repository, running git init inside a folder, and that HEAD file. Keep in mind I have never used any form of revision control before, so even the most basic concepts are foreign to me. As I post this, I'm also reading up and trying to figure it out myself.
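
    To make the question concrete, a minimal first-project workflow might look like this (the remote URL is hypothetical; Gitorious shows the real one on the repository page):

        git init                      # turn the current folder into a repository; this creates .git, where HEAD lives
        git add .                     # stage the project files
        git commit -m "First commit"  # record them in the local repo
        git remote add origin git@gitorious.org:myproject/myproject.git  # register the hosted copy
        git push origin master        # publish local commits to it

    HEAD is just a pointer to the commit your working directory is based on, and the hosted repo only ever sees what you push.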

  • Data Archiving vs not

    - by Recursion
    For the sake of data integrity, is it wiser to archive your files or just leave them unarchived? No compression is being used. My thinking is that if you leave your files unarchived, any corruption will only hurt a smaller number of files. But if you archive, let's say, all of your documents, then even the slightest corruption can make the entire archive unrecoverable. So what's the best way to keep a clean file system but not be subject to data corruption?
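
    Either way, corruption is easier to manage if it can be detected per file; one hedged sketch, using standard coreutils, is to keep a checksum manifest alongside the files:

        # record a checksum for every file once
        find documents/ -type f -exec sha1sum {} + > manifest.sha1
        # verify later; any damaged file is reported individually
        sha1sum -c manifest.sha1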

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: We are working on a multi-threaded (Linux-C) application that follows a pipeline model. Each module has a private thread and encapsulated objects which do the processing of data, and each stage has a standard form of exchanging data with the next unit. The application is free from memory leaks and is thread-safe, using locks at the points where data is exchanged. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 odd objects which all have some critical logging to do.

    Most discussion I have seen is about logging levels, as in Log4j and its ports to other languages. The real big question is how the overall logging should happen:

    - One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach is very bad when logs become too big.
    - If all objects instantiate their individual loggers, there will be too many files (about 30-40 of them), and unlike above, one loses the true order of events. Timestamping is one possibility, but it is still a mess to collate.
    - If there is a single global logger (singleton pattern), it indirectly blocks many threads while one is busy putting up logs. This is unacceptable when the processing in the threads is heavy.

    So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large-scale applications? I would also love to learn from some of the real designs of large-scale applications to get inspiration from!
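
    One widely used shape for this (a sketch of the pattern, not a drop-in design; written in C# to match the other code on this page, though the poster's application is C) is a single logger whose callers only enqueue a line, while one dedicated thread does the slow file I/O. Producers block only for the length of an enqueue, and a single consumer preserves the true order of events:

        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading;

        sealed class AsyncLogger : IDisposable
        {
            // Bounded queue: producers block only if the writer falls far behind.
            private readonly BlockingCollection<string> _queue = new BlockingCollection<string>(10000);
            private readonly Thread _writer;
            private readonly StreamWriter _out;

            public AsyncLogger(string path)
            {
                _out = new StreamWriter(path, append: true);
                _writer = new Thread(Drain) { IsBackground = true };
                _writer.Start();
            }

            // Called from any pipeline thread: the cost is one enqueue, not one disk write.
            public void Log(string module, string message)
            {
                _queue.Add(string.Format("{0:o} [{1}] {2}", DateTime.UtcNow, module, message));
            }

            // Single consumer: lines land in the file in arrival order.
            private void Drain()
            {
                foreach (var line in _queue.GetConsumingEnumerable())
                    _out.WriteLine(line);
            }

            public void Dispose()
            {
                _queue.CompleteAdding(); // let the writer flush the backlog
                _writer.Join();
                _out.Dispose();
            }
        }

    In C, the same shape is a mutex/condvar-protected ring buffer drained by one pthread; the point is that contention moves from the file to an in-memory queue.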

  • Object construction design

    - by James
    I recently started to use C# to interface with a database, and there was one part of the process that appeared odd to me. When creating a SqlCommand, the method I was led to took the form:

        SqlCommand myCommand = new SqlCommand("Command String", myConnection);

    Coming from a Java background, I was expecting something more similar to:

        SqlCommand myCommand = myConnection.createCommand("Command String");

    I am asking, in terms of design, what is the difference between the two? The phrase "single responsibility" has been used to suggest that a connection should not be responsible for creating SqlCommands, but I would also say that, in my mind, the difference between the two is partly a mental one between a connection executing a command and a command acting on a connection, the latter of which seems less like what I have been led to believe OOP should be. There is also a part of me wondering if the two should be completely separate, and should only come together in some sort of connection.execute(command) method. Can anyone help clear up these differences? Are any of these methods "more correct" than the others from an OO point of view? (P.S. The fact that C# is used is completely irrelevant. It just highlighted to me that different approaches were used.)
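
    For what it's worth, ADO.NET actually offers both shapes; SqlConnection.CreateCommand() exists, it just takes no command-text argument. A short sketch of the two styles side by side (table and connection string are made up for illustration):

        using System.Data.SqlClient;

        using (var connection = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
        {
            connection.Open();

            // Style 1: the command is constructed against a connection.
            var cmd1 = new SqlCommand("SELECT COUNT(*) FROM Widgets", connection);

            // Style 2: the connection acts as a factory; the text is set afterwards.
            var cmd2 = connection.CreateCommand();
            cmd2.CommandText = "SELECT COUNT(*) FROM Widgets";
        }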

  • VMware Fusion / New Machine / verify VMs?

    - by drewk
    I just got a 15" MacBook Pro with 8GB RAM, 2.66 i7 and 500GB SSD. Very sweet and fast! My old machine was 2.66 Core 2 Duo. I used Migration Assistant to migrate my old MBP to the new. When I started VMware to run Windows and Ubuntu I got the message first This VM has been moved or copied to a new machine. Did you move or copy it? I answered Copied. Then I got the message this CPU is a different configuration than the machine that created the VM. You may get unpredictable results. I answered Open Anyway. Windows XP, Windows 7, and Ubuntu all seem to work OK, but how can I verify? Is there some form of test harness for a VM? I have had a VM degrade and become unusable with very catastrophic data loss (on Parallels...), so I am a bit paranoid. Thanks,

  • How can I prevent users from installing software?

    - by Cypher
    Our organization is a bit different than most. During certain times of the year, we grow to thousands of employees, and during off-times, less than a hundred. Over the course of a few years, many thousands of people have come and gone in our offices, and left their legacy behind in the form of all sorts of unwanted, unapproved, (and sometimes unlicensed) software installs on our desktops. We are currently installing redundant domain controllers and upgrading current servers, all running Windows Server 2008 Enterprise, and will eventually be able to run a pure 2008 DC network. With that in mind, what are our options in being able to lock down users, such that they cannot install unauthorized software on systems without the assistance (or authorization) of the IT group? We need to support approximately 400 desktops, so automation is key. I've taken note of the Software Restrictions we can implement via Group Policy, but that implies that we already know what users will be installing and attempting to run... not quite so elegant. Any ideas?

  • "The protocol 'net.msmq' is not supported."

    - by Randolpho St. John
    OMG, a new lesson! Will wonders never cease? So I ran into an interesting issue setting up a WCF service to consume an MSMQ queue. I won't bother you with the details of how to actually build a WCF/MSMQ service; there are plenty of tutorials on the subject. I want to share with you an interesting error that I ran into and the surprisingly simple fix. The error occurs when attempting to generate a Service Reference, or even simply browsing to the WSDL of your WCF/MSMQ service, in the form of a YSOD with the following error: "The protocol 'net.msmq' is not supported." A lot of Googling on the subject turned up plenty of questions with the same error but no answers. So I went digging into some application-level config files on a server that already had a WCF/MSMQ service successfully set up by the network admin, and the answer was amazingly simple: if you are hosting an MSMQ/WCF service in IIS, you have to tell IIS to allow the net.msmq protocol. It's in the advanced settings for the application or site in which you are hosting the service. .... aaaand, that's it.
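
    If you'd rather script it than click through IIS Manager, the same switch can be flipped with appcmd (site and application names below are hypothetical):

        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/QueueService" /enabledProtocols:http,net.msmq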

  • How to prevent stretching, blurring and pixelating of embedded logos in VirtualDub?

    - by NoCanDo
    Howdy, take a look at this please: http://www.youtube.com/watch?v=te-HVN8y_QE&hd=1 . Notice the embedded "logo" in the upper left corner, and how blurry and pixelated it is? This is the original image: http://i.imagehost.org/0148/movie_watermark.png ! The stretching, blurring, pixelating etc. most likely come from resizing the original video from 1920x1200 to 1280x720 and encoding it with H.264. Can anyone tell me how I can prevent the blurring, unsharpening and pixelating and retain the logo's original quality? How do I exclude the logo from the whole encoding process and just slap it on there in its original format and form?

  • uploading large files (mp4) to IIS 7.5 gives 500 Internal Server Error

    - by dragon112
    I made a website on which I need to be able to upload video files, and it worked for quite a while. However, at some point it just stopped working, and now IIS gives me a 500 Internal Server Error when I upload a video. Images do work (possibly due to their smaller size). I use an HTML form with a PHP server-side script to upload. I have already set the user permissions for the entire inetpub to allow all actions for the IIS user. If you have any idea what it could be, PLEASE tell me; I have been trying to fix this for weeks now. Thanks in advance!
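
    For anyone hitting the same wall, uploads that fail only above a certain size usually run into one of the size caps rather than permissions; these are the settings commonly checked in a PHP-on-IIS setup (values are illustrative, not recommendations):

        ; php.ini
        upload_max_filesize = 200M
        post_max_size = 210M

        <!-- web.config: IIS request body cap, in bytes -->
        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="209715200" />
            </requestFiltering>
          </security>
        </system.webServer>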

  • Announcing SharePoint Saturday Columbus 2010

    - by Brian Jackett
    It is with great pleasure that today I can announce the very first SharePoint Saturday Columbus. SharePoint Saturday Columbus 2010 will be happening on August 14th at The Conference Center at OCLC in Dublin, OH. As many readers of my blog may be aware, I've attended or spoken at over half a dozen SharePoint Saturdays in the past 8 months alone, but this will be my first time actually organizing one. A group of very dedicated individuals and I have been hard at work the past few months getting the ball rolling, and we're happy to see it taking shape.

    Pertinent Resources
    - Website – find announcements and up-to-date details at www.SharePointSaturday.org/Columbus
    - Twitter – follow us at @SPSColumbus
    - Email – email us at [email protected] with any questions, comments, or concerns

    What can you do?
    There are three main areas where we are looking for your help at this time:
    - Spread the word – simply put, start spreading the word to friends, coworkers, user groups, clients, and anyone else you think may be interested in SharePoint Saturday Columbus 2010. We'll be opening registration in early July, so look for an announcement with details closer to that timeframe.
    - Sponsorship – if your company or a company you know is interested in sponsoring SharePoint Saturday Columbus 2010, we have many opportunity levels available. Email [email protected] for more information and we'll send you a sponsorship packet.
    - Speakers – if you or someone you know is interested in presenting at SharePoint Saturday Columbus 2010, please fill out a speaker submission form found here and email it to [email protected] by July 10th.

    I hope you can join us for this great event!

    -Frog Out

  • How to highlight full row in GNU screen

    - by dorelal
    I have a Mac Pro and I am now trying to use GNU screen instead of Terminal. When I need to look at the log, I hit Ctrl-A [ and I get into copy mode. However, it is really hard to detect where my cursor is, since it is red and drawn as an underscore. Is it possible to get the cursor in block form in copy mode, or to highlight the whole row? Any stronger visual indication of where my cursor is in copy mode would be of help. Thanks

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline and, potentially, whether I will need to bring in an outside programmer or whether this could be learned within a few weeks. The website I am working on will have an interface where users log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and other controls. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do: I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox, and present a cumulative score (i.e., you got 57% correct). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output of their results offline is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each of the 10-15 sections, with a green completion bar or a % correct, which would draw from this data. I have never had to code something that stores information like this, so back to my question: would it be better to learn the language needed, or to bring in a coder/developer for the back-end work?

  • IIS 7.5 website application pool with 'full control' permissions hackable?

    - by Caroline Beltran
    Although I would never set this permission, I would like to know how a static HTML website with the permission mentioned in the title could be compromised. In my humble opinion, I would guess that this would pose no threat, since a web visitor has no way to upload/edit/delete anything. What if the site was a simple PHP website that simply displayed 'hello world'? What if this PHP site had a contact-us form that was properly sanitized? Thank you. EDIT: I should mention restricting IIS to GET and POST requests only; otherwise, anybody can delete and upload content.

  • Can we put random URL entries in DNS?

    - by ring bearer
    We are using Microsoft DNS. All or most of our local hosts (within the company) are in the following domain: *.company.org, so a host name looks like mymachine001.company.org. Is it possible to set up wildcard DNS entries of the form *.subd.company.com? Note: the URL ends with .com; all other hosts ever set up in this DNS have been of the format *.company.org. What I am trying to achieve is the following: a user within the internal network types the URL http://someprefix.subd.company.com in a browser and hits enter. Since there is a wildcard entry in DNS, the user gets routed to the host mapped to *.subd.company.com in the DNS. Note: at the same time, company.com has a public DNS entry, and that is mapped to a physical IP in some other network (a data center).
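
    In zone-file terms, the record being asked about would look something like the sketch below (the IP is made up; in the Microsoft DNS console the equivalent is creating a zone for subd.company.com and adding a host record named *):

        ; forward lookup zone: subd.company.com
        *.subd.company.com.   IN  A   10.1.2.3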
