Search Results

Search found 31242 results on 1250 pages for 'looking for hosting'.


  • Differentiating between Hard and Soft Dependencies - Fedora Yum [closed]

    - by Sujit
    I will ask this with an example: I installed gnash-plugin on 64-bit Fedora with yum. It pulled in the following packages:

        Installing : agg-2.5-9.fc13.x86_64                     1/6
        Installing : gtkglext-libs-1.2.0-10.fc12.x86_64        2/6
        Installing : boost-thread-1.44.0-7.fc14.x86_64         3/6
        Installing : boost-date-time-1.44.0-7.fc14.x86_64      4/6
        Installing : 1:gnash-0.8.8-4.fc14.x86_64               5/6
        Installing : 1:gnash-plugin-0.8.8-4.fc14.x86_64        6/6

    Now, I tested the plugin and I didn't like it. I want to remove all of the above packages that were installed along with the plugin, as I'm no longer going to need them. How can I do this? I checked remove-with-plugin for yum, but it pulls in all the packages that currently depend on those packages. I understand the thought process behind showing which packages would be affected, but I am wondering if there is any way of looking at the history to see what packages got installed when I installed a certain package. Before gnash-plugin was there, Firefox was running fine; but after installation, Firefox now depends on this new plugin. Has anyone worked on differentiating hard dependencies (hard meaning the program will break if that package is not there) and soft dependencies (soft meaning the program may not be affected fatally)?
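
    For what it's worth, newer versions of yum record exactly this kind of per-transaction history; a minimal sketch of how one might inspect and roll back the transaction that installed the plugin (the transaction ID 42 is made up for illustration):

        yum history list gnash-plugin   # find the transaction that installed it
        yum history info 42             # list every package that transaction pulled in
        yum history undo 42             # remove only the packages installed by it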

    Read the article

  • Convert MP3 to AAC, FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting from MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFmpeg (grab the x64 Windows build); for the best results, also get the Nero AAC Encoder. Now the command lines.

    Step 1 (from FLAC):

        ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    or (from MP3):

        ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    The output.m4a is an intermediate file that we now put an AAC wrapper on via FFmpeg.

    Step 2:

        ffmpeg -i output.m4a -vn -acodec copy final.aac

    Done :) There are a couple of options with the FFmpeg library, in that we could import the libraries and drive the API directly for the same result; FFmpeg has this support, and you can get the relevant libraries from their site (they even have the source if you are that keen :-)). In this case I am going to wrap the command lines in C# external-process calls. (For the app that I am building to convert the 10 million tracks, a more complex multithreaded app sits around this novel code.)

        // Arrange metadata about the call.
        // Note: the pipe to neroAacEnc is shell syntax, so the command is run
        // through cmd.exe rather than invoking ffmpeg.exe directly.
        Process myProcess = new Process();
        ProcessStartInfo p = new ProcessStartInfo();
        string sArgs = string.Format(
            "/c ffmpeg -i {0} -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of {1}",
            inputFile, outputFile);
        p.FileName = "cmd.exe";
        p.CreateNoWindow = true;
        p.RedirectStandardOutput = true;
        // p.WindowStyle = ProcessWindowStyle.Normal;
        p.UseShellExecute = false;

        // Execute
        p.Arguments = sArgs;
        myProcess.StartInfo = p;
        myProcess.Start();
        // Read stdout before waiting, to avoid a deadlock if the pipe buffer fills
        string output = myProcess.StandardOutput.ReadToEnd();
        myProcess.WaitForExit();

    Now we would execute a second call using the same code, but with a different sArgs, to put the AAC wrapper on the m4a file. That's it. So if you need to do conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast; with conversion times of around 2-3 seconds, all of this can be done on the fly :) Justin Oehlmann, ref: StackOverflow.com

    Read the article

  • The Case of the Invisible Training Resource

    - by GGBlogger
    I’ve been at this programming business longer than I would like to admit, and for that reason I am always looking for new training resources, as anyone in this business knows all too well. I’ve looked at AppDev (way too expensive for my meager budget), LearnVisualStudio (I have a lifetime subscription), and several others. What appears to be a new version of AppDev, called LearnDevNow, has some good material, and so it goes. So what does all this have to do with the title? I’ve been using Adobe’s Flex Builder 3 and now their latest Flash Builder 4 (a renaming of the Adobe Flex development environment). One of the perks offered on registering was a month’s subscription to Lynda.com. My first reaction was “What the heck is Lynda.com?”, but I chose it and signed up. What a surprise I was in for. I’d never heard of them before, but discovered one of the most comprehensive training resources I’ve ever seen, and all for $34.95 a month in the version that offers exercise files. They do have a heavy focus on Adobe products but also cover a lot of Microsoft material. What bothered me is that in all the time I’ve been in this business I’d never heard of them! Thus the allusion to “The Invisible Training Resource.” Not only do they offer beginner and in-depth training, but the syllabus and the instructors are some of the best I’ve seen in the industry. So I just feel that more folks need to know about this organization. If you need training in the venues they offer, I can attest that they provide some of the best training available in this industry, in my humble opinion. You really owe it to yourself to check out Lynda.com.

    Read the article

  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There are a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it will be better to start from scratch or continue with the existing code base? What other extra steps might you take to help control client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?

    Read the article

  • Would Using a PHP Framework Be Beneficial in My Context?

    - by Fractal
    I've just started work at a small start-up company that mainly uses PHP to develop its front-end apps. I had no prior PHP experience before joining, and this has led to my apps becoming large pieces of spaghetti code. I essentially started by adding code to implement an initial feature, and then continued to hack in more code to implement further features, without much thought for the overall design. The apps themselves output XML to render on small mobile devices. I recently started looking into frameworks that I could use. I reckon an advantage would be that they seem to force developers to modularise their programs using good-practice design patterns. This seems great for someone in my position. The extra functions they provide, for example interfacing with databases in such a way as to make SQL injection impossible, would be very useful too. The downside I can see is that there will be a lot of overhead for me in terms of the time taken to learn the framework itself (while still getting to grips with PHP). I'm also worried that it will be overkill for the scale of the apps we develop. They tend to be programs that interface with a fairly simple back-end DB and generate about 5 different XML screens: probably around one or two thousand lines of code. The time it takes just to configure a framework may not be worth it. The final problem I can see is that developers in the company, who have to go over my code and who do not know the PHP framework I may use, will have a much harder time understanding it. Given those pros and cons, I'm still not sure what the best course of action will be, so any advice would be greatly appreciated.

    Read the article

  • DIY Carbonator Creates Pop Rocks-Like Fizzy Fruit [Science]

    - by Jason Fitzpatrick
    If you’ve ever sat around wishing that scientists would stop wasting time trying to solve pressing global problems and instead genetically engineer a bizarre but delicious hybrid of Pop Rocks candy and wholesome fruit, this mad-scientist experiment is for you. Over at Evil Mad Scientist Laboratories they share a really fun weekend project. Contributor Rich Faulhaber was looking for a way to make eating fruit extra fun and science-infused for his kids. His solution? Build a homemade carbon dioxide injector that infuses fruit with carbonation. Having trouble imagining that? Envision a bowl of strawberries where every strawberry bursts into a crazy flurry of strawberry flavor and champagne bubbles every time you bite into it. Fizzy fruit! Hit up the link below to see how he took pretty common parts: a CO2 tank from a paintball gun, a water filter canister from the hardware store, and other cheap and readily available parts (with the exception of the gas regulator, for which he suggests you shop garage sales and surplus stores to find a deal), and combined them to create a CO2 fruit infuser, and to read more about his setup and the procedure he uses to infuse fruit with carbonation. The CO2inator [Evil Mad Scientist Laboratories via Hack a Day]

    Read the article

  • Long-term plan of attack to learn math?

    - by zhenka
    I am a web developer with a desire to expand my skill set into the mathematics relevant to programming. As a second career, I am stuck in college doing some of the requirements while working. I was hoping that my education would teach me the needed skills to apply math; however, I am quickly finding its easily-testable, breadth-based approach very inefficient for the time invested. For example, in my Calculus 2 class, the only remotely useful, mind-expanding experience I had was volumes and areas under the curve. The rest was just monotonous, glorified algebra, which, while it comes easily to me, could be done by software like Wolfram Alpha within seconds. This is not my idea of learning math. So here I am, a frustrated student looking for a way to improve my understanding of math in a way that focuses on application and understanding, with needless tedium maximally removed. However, I cannot find a good long-term study strategy with this approach in mind. So, for those of like mind: how would you go about learning the necessary math without worrying too much about stuff a computer can do much better?

    Read the article

  • Matching the superclass's constructor's parameter list: is treating a null default value as a non-null value within a constructor a violation of LSP?

    - by Panzercrisis
    I ran into this when messing around with FlashPunk, and I'm going to use it as an example. Essentially, the main sprite class is class Entity. Entity's constructor has four parameters, each with a default value. One of them is graphic, whose default value is null. Entity is designed to be inherited from, with many such subclasses providing their own graphic within their own internal workings. Normally these subclasses would not have graphic in their constructor's parameter lists, but would simply pick something internally and go with it. However, I was looking into possibly still adhering to the Liskov Substitution Principle, which led me to the following example:

        package com.blank.graphics {
            import net.flashpunk.*;
            import net.flashpunk.graphics.Image;

            public class SpaceGraphic extends Entity {
                [Embed(source = "../../../../../../assets/spaces/blank.png")]
                private const BLANK_SPACE:Class;

                public function SpaceGraphic(x:Number = 0, y:Number = 0,
                                             graphic:Graphic = null, mask:Mask = null) {
                    super(x, y, graphic, mask);
                    if (!graphic) {
                        this.graphic = new Image(BLANK_SPACE);
                    }
                }
            }
        }

    Alright, so now there's a parameter list in the constructor that perfectly matches the one in the superclass's constructor. But if the default value for graphic is used, it'll exhibit two different behaviors depending on whether you're using the subclass or the superclass: in the superclass there won't be a graphic, but in the subclass it'll choose the default graphic. Is this a violation of the Liskov Substitution Principle? Does the fact that subclasses are almost intended to use different parameter lists have any bearing on this? Would minimizing the parameter list violate it in a case like this? Thanks.

    Read the article

  • Why do old (301-redirected) links stay on Google when breaking a site into multiple domains?

    - by Sampo Sarrala
    Some background: we used to have a single site on a single domain (let's call it mainsite.com) with product information, but things have changed since then, and the product database has grown fast. So we decided to move some major products/manufacturers under their own domains (let's call one of them subsite.com) while still using our main database/codebase. What we've done: added the subsite.com domain for product 1 by Great Products Co.; built some new nice-looking front pages, info pages, etc.; added detail pages that use information from the original DB; redirected product/group links from mainsite.com using 301 redirects; verified that the redirects work as expected; and waited some time for Google reindexing (over 30 days, which I've heard should be more than enough). Results: if I search for our moved products on Google, it finds and lists them, but with the old links to our main page, like mainsite.com/group/product1, when it should show the link to the new site, subsite.com/product1. The links from Google redirect as they should; as said, the redirects are verified [301]. Main question: is there any reason why Google would not follow the 301 redirects and update its links so that they point to our new manufacturer/product site, subsite.com?
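
    As an aside, a quick way to double-check that the old URLs really answer with a 301 (and not, say, a 302, which search engines treat as temporary) is to inspect the raw response headers; a sketch with curl, using the example URLs from the question:

        curl -sI http://mainsite.com/group/product1 | head -n 5
        # expect:  HTTP/1.1 301 Moved Permanently
        #          Location: http://subsite.com/product1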

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now, the issue I'm having with running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run the tests in parallel against the same database without uniquely generating the test data fields for each test. For example: when testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. When testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar, and then use the UI to try to do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?

    Read the article

  • Statistical Software Quality Control References

    - by Xodarap
    I'm looking for references about hypothesis testing in software management. For example, we might wonder whether "crunch time" leads to an increase in defect rate; this is a surprisingly difficult thing to test. There are many questions on how to measure quality, but that isn't what I'm asking. And there are books like Kan's which discuss various quality metrics and their utilities; I'm not asking that either. I want to know how one applies these metrics to make decisions. E.g., suppose we decide to go with critical errors / KLOC. One of the problems we'll have to deal with is that this is not a normally distributed data set (almost all patches have zero critical errors). And further, it's not clear that we really want to examine the difference in means. So what should our alternative hypothesis be? (Note: based on previous questions, my guess is that I'll get a lot of answers telling me that this is a bad idea. That's fine, but I'd request that they be based on published data, instead of your own experience.)

    Read the article

  • Force Blank TextBox with ASP.Net MVC Html.TextBox

    - by Doug Lampe
    I recently ran into a problem with the following scenario: I have parent/child data with a one-to-many relationship from the parent to the child. I want to be able to update the parent and existing child data AND add a new child record, all in a single post, and I don't want to create a model just to store the new values. One of the things I LOVE about MVC is how flexible it is in dealing with posted data. If you have data that isn't in your model, you can simply use the non-strongly-typed HTML helper extensions and pass the data into your actions as parameters, or use the FormCollection. I thought this would give me the solution I was looking for. I simply used Html.TextBox("NewChildKey") and Html.TextBox("NewChildValue") and added parameters to my action to take the new values. So here is what my action looked like:

        [HttpPost]
        public ActionResult EditParent(int? id, string newChildKey, string newChildValue, FormCollection forms)
        {
            Model model = ModelDataHelper.GetModel(id ?? 0);
            if (model != null)
            {
                if (TryUpdateModel(model))
                {
                    if (ModelState.IsValid)
                    {
                        model = ModelDataHelper.UpdateModel(model);
                    }
                    string[] keys = forms.GetValues("ChildKey");
                    string[] values = forms.GetValues("ChildValue");
                    ModelDataHelper.UpdateChildData(id ?? 0, keys, values);
                    ModelDataHelper.AddChildData(id ?? 0, newChildKey, newChildValue);
                    model = ModelDataHelper.GetModel(id ?? 0);
                }
                return View(model);
            }
            return new EmptyResult();
        }

    The only problem with this is that MVC is TOO smart. Even though I am not using a model to store the new child values, MVC still passes the values back to the text boxes via the model state. The fix for this is simple but not necessarily obvious: remove the data from the model state before returning the view:

        ModelState.Remove("NewChildKey");
        ModelState.Remove("NewChildValue");

    Two lines of code to save a lot of headaches.

    Read the article

  • I know fundamental programming. But how do I get started in game development now?

    - by Rohan Menon
    I'm a 20-year-old programming student. I know fundamental programming in BASIC, C, C++ and Java. What I wanted to ask is: where do I go from here? Are there any books the community can recommend that will help me develop a game, or at least learn game development? I've had a lot of ideas and really want to make some sort of prototype to see if I'm suited for the industry. I really don't mind learning any new languages, but I need to know what I should begin with. A good book that builds a little more understanding as I go would be very helpful. Maybe a tutorial for developing some basic 2D games, like a side-scroller, Snake, or Pocket Tanks, in an easy-to-understand SDK? I know that to get some credit under your belt, you need to be able to make a few games on your own. Also, what platform should I start on: the PC, iOS, or Android (as an introduction) for now? I don't want to get into high-level game design just yet, just something a bit basic to help out in future development. Anything pointing me in the right direction will be really, really helpful. Edit: I also want to say that I'm looking at this from a game designer's point of view more than a game programmer's. I want suggestions on any SDKs or easy-to-use programs I can use to understand game design, and then delve deeper into the programming after that. Not as employment, but for developing my own games (for now).

    Read the article

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via some service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to create accounts (or be assigned accounts) with passwords. So, the question is: how can my WordPress site's content be made viewable only by clicking the link to it in the email newsletter? It shouldn't be findable in a Google search, but once at the site, the visitor should be able to view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions, and answers, which helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: "Is there a way to password protect a directory in cPanel, without the user being prompted for the password when they access it via the web?" This person's question is the closest I could find to my situation: "Restrict direct folder access via .htaccess except via specific links".
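
    One possible shape for that .htaccess (a sketch only, assuming Apache with mod_rewrite and mod_headers enabled; the token key=s3cret, the cookie name nl, and .example.com are all made up for illustration): newsletter links carry ?key=s3cret, a cookie remembers returning visitors so they can browse past issues, everyone else gets a 403, and an X-Robots-Tag header asks search engines not to index the pages:

        # ask search engines to keep the site out of their indexes
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # visitors arriving from a newsletter link carry the token;
            # set a cookie so they can keep browsing without it
            RewriteCond %{QUERY_STRING} (^|&)key=s3cret(&|$)
            RewriteRule ^ - [CO=nl:ok:.example.com,L]
            # no token and no cookie: refuse the request
            RewriteCond %{QUERY_STRING} !(^|&)key=s3cret(&|$)
            RewriteCond %{HTTP_COOKIE} !nl=ok
            RewriteRule ^ - [F]
        </IfModule>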

    Read the article

  • Sort algorithms that work on large amounts of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that can work even when the whole data set cannot be held in main memory at once. The only candidate that I have found up to now is merge sort: you can implement the algorithm in such a way that it scans the data set at each merge without holding all the data in main memory at once. The variation of merge sort I have in mind is described in this article, in the section "Use with tape drives". I think this is a good solution (with complexity O(n log(n))), but I am curious to know if there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory. EDIT: Here are some more details, as required by the answers. The data needs to be sorted periodically, e.g. once a month; I do not need to insert a few records and have the data sorted incrementally. My example text file is about 1 GB of UTF-8 text, but I wanted to solve the problem in general, even if the file were, say, 20 GB. It is not in a database and, due to other constraints, it cannot be. The data is dumped by others as a text file; I have my own code to read this text file. The format of the data is a text file where newline characters are record separators. One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I have described above (see the sketch below).
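
    That split/sort/merge plan is exactly what classic external merge sort does, and on a newline-separated text file it can even be tried directly from the command line; a minimal sketch, assuming GNU coreutils and made-up file names:

        # split the big file into runs small enough to sort in memory
        split --lines=10000000 big.txt run.
        # sort each run independently (these could be done in parallel)
        for f in run.*; do LC_ALL=C sort -o "$f" "$f"; done
        # k-way merge of the pre-sorted runs; -m streams them instead of
        # loading everything into memory at once
        LC_ALL=C sort -m -o big-sorted.txt run.*
        rm run.*

    (GNU sort on its own also spills to temporary files and merges, so a plain sort -o big-sorted.txt big.txt is the zero-effort baseline to beat.)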

    Read the article

  • CPU Wars Is a Trump-Style Card Game Driven by Chip Stats

    - by Jason Fitzpatrick
    If you’re looking for the geekiest card game around, you’d be hard pressed to beat CPU Wars, a top-trumps card game built around CPU specs. From the game’s designers: “CPU Wars is a trump card game built by geeks for geeks. For Volume 1.0 we chose 30 CPUs that we believe had the greatest impact on desktop history. The game is ideally played by 2 or 3 people. The deck is split between the players, and then each player takes a turn and picks a category that they think has the best value. We have chosen the most important specs that could be numerically represented, such as maximum speed achieved and maximum number of transistors. It’s lots of fun, it has a bit of strategy, and it can be played during a break or over a coffee.” If you’re interested, you can pick up a copy for £7.99 (roughly $12.50 USD). Hit up the link below for more information.

    Read the article

  • What is the best way to restrict access to adult content on Ubuntu?

    - by Stephen Myall
    I bought my kids a PC and installed 12.04 (Unity) on it. The bottom line is, I want my children to use the computer unsupervised while I have confidence they cannot access anything inappropriate. What I have looked at: Scrubit, a tool which allows me to configure my wifi router to block content; this solution would also protect my other PC and mobile devices, but it may be overkill, as I just want the solution to work on one PC. I also did some Google searches and came across the application called Nanny (it seems to look the part). My experience of OSS is that the best solutions frequently never appear first in a Google search list, and in this case I need to trust the methods; my question is therefore very specific. I want to leverage your knowledge and experience to understand what the best way is to restrict adult content on 12.04 LTS, as this is important to me. It may be a combination of things, so please don't just answer "try this or that" and point me at some PPA, unless you can share your experience of how good it is and, of course, any constraints. Thanks in advance.

    Read the article

  • Book Review: Professional WCF 4

    - by Sam Abraham
    My investigation of WCF internals has set the right stage to revisit Professional WCF 4 by Pablo Cibraro, Kurt Claeys, Fabio Cozzolino and Johann Grabner. In this book, the authors dive deep into all aspects of the WCF API in a reading targeted at intermediate and advanced developers. Book quality, as far as presentation, code completeness, content clarity, and organization go, was superb. The authors have taken a hands-on approach to thoroughly covering the WCF 4.0 API, with three chapters totaling 100+ pages completely dedicated to business cases, with downloadable source code readily available. Chapter 1 outlines SOA best-practice considerations. The next three chapters take a top-down approach to the WCF API, covering service and data contracts, bindings, clients, instancing, and Workflow Services, followed by another carefully thought-out three chapters covering the security options available via the WCF API. In conclusion, Professional WCF 4 provides thorough coverage of the WCF API and is a recommended read for anybody looking to reinforce their understanding of the various features available in the WCF framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers’ Group. All the best, --Sam

    Read the article

  • The Oracle OpenWorld 2012 Call for Papers Closes April 9

    - by Kerrie Foy
    It is on! The Oracle OpenWorld 2012 Call for Papers closes April 9. This year's OpenWorld event runs September 30 - October 4 at the Moscone Center, San Francisco. Oracle OpenWorld is among the world’s largest industry events for good reason: it offers a vast array of learning and networking opportunities in one of the planet’s great cities, and one of the key reasons for its popularity is the prominence of presentations by customers. If you would like to deliver a presentation based on your experience, now is the time to submit your abstract for review by the selection panel. The competition is strong: roughly 18% of entries are accepted each year from more than 3,000 submissions. Review panels are made up of experts both internal and external to Oracle. Successful submissions often (but not exclusively) focus on customer successes, how-tos, or best practices. http://www.oracle.com/openworld/call-for-papers/information/index.html What is in it for you? Recognition, for one thing. Accepted sessions are publicized in the content catalog, which goes live in mid-June, and sessions given by external speakers often prove the most popular. Plus, accepted speakers get a complimentary pass to Oracle OpenWorld with access to all sessions and networking events; that could save you up to $2,595! Be sure to designate your session for inclusion in the correct track by selecting "APPLICATIONS: Product Lifecycle Management" from the Primary Track drop-down menu. Looking forward to seeing you at this year's OpenWorld!

    Read the article

  • Oracle and Cavium to work together on Java SE 8 on 64-bit ARMv8

    - by Henrik Stahl
    We have been working for some time on a standard Oracle JDK 8 port for the upcoming introduction of 64-bit servers based on the new ARMv8 microarchitecture. At ARM TechCon 2013 in Santa Clara, California, we announced a roadmap with an expected GA in 2015. This project is going very well and is ahead of schedule. We will soon be at the point where we will make binaries available outside of Oracle, first in a managed beta program with select customers/partners, and sometime during the fall of 2014 as a public early access program. Unless something changes, we are looking at an early 2015 GA. We should be able to share a detailed ramp-down and GA plan by JavaOne 2014. One of the things we (obviously) need in order to produce a high-quality port is hardware for development and QA. We are therefore happy to announce that we will be collaborating with Cavium on this project. Cavium has been a supporter of the Java ecosystem for a long time, and we have numerous joint customers running various Java versions on Cavium MIPS- and ARM-based hardware. Cavium has now agreed to provide us with development hardware and engineering resources so that we can certify and optimize the initial Oracle JDK 8 release on Cavium's ThunderX hardware. This is expected to improve the quality and performance of JDK 8 on ARMv8 in general, as well as on Cavium's hardware. For more information: Cavium announcement on the ThunderX product family; Cavium announcement on Oracle collaboration. As a reminder, we plan to release the Oracle JDK 8 port for 64-bit ARMv8 under the royalty-free (for general-purpose servers, etc.) Binary Code License, but we have no current plans to open source it.

    Read the article

  • Apt-get saying "Unable to correct problems, you have held broken packages."

    - by YatharthROCK
    TL;DR: sudo apt-get install ... is saying "Unable to correct problems, you have held broken packages."

    The problem: I was trying to get the WebApps feature for PP and QQ (Precise and Quantal) following this blog post. I ran the sudo add-apt-repository ppa:webapps/preview command to add the repository, but I got a connection error. Since I know my current ISP gives me a shaky connection, I tried again, and sure enough it worked. Then I ran sudo apt-get install unity-webapps-preview, but realized we had to run apt-get update first, so I hit Ctrl + C to stop it. Then I ran sudo apt-get update, which worked without a fuss, but when I ran sudo apt-get install unity-webapps-preview again later, it showed an error message. Here's the dump:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         unity-webapps-preview : Depends: xul-ext-unity but it is not going to be installed
                                 Depends: xul-ext-websites-integration but it is not going to be installed
                                 Depends: xul-ext-webaccounts but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I think this might be because I interrupted the earlier command. It hadn't got a chance to output anything, though; I stopped it pretty fast.

    What I tried: I have run a number of commands: sudo apt-get install --fix-broken, sudo apt-get autoclean, sudo apt-get autoremove, sudo apt-get -f install, and sudo apt-get install ppa-purge followed by sudo ppa-purge ppa:webapps/preview. Even after running sudo apt-get upgrade after every try, none of them worked.

    Research: I tried searching Google, looking at a couple of forums, and searching on AU, but to no avail. Help would be appreciated.
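
    One avenue the post doesn't mention (a hedged suggestion, not from the original): after an interrupted install, finishing any half-configured packages and then naming the unmet dependencies explicitly often shakes this error loose, because apt will then report what is blocking each one:

        sudo dpkg --configure -a    # finish anything left half-configured
        sudo apt-get update
        # install the listed dependencies explicitly so apt explains each conflict
        sudo apt-get install xul-ext-unity xul-ext-websites-integration xul-ext-webaccounts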

    Read the article

  • Which is the better offline method for a large-scale application?

    - by Manish Pansiniya
    We have a big data management website used by several properties. Some of our customers have downtime (they can't access the net for an hour or two). We want our site to support offline data viewing and inventory management (typical data search and add/remove), and when the user goes online we can sync the changes back to our central database. Customers use several platforms, like Windows, iOS, etc. We've been looking into several different options; here are the major choices. 1. Develop an offline web app supported by HTML5: develop a 'fallback' mechanism and interact with data from the app cache, as explained here (http://www.htmlgoodies.com/html5/tutorials/introduction-to-offline-web-applications-using-); a minimal manifest sketch follows below. 2. Develop a desktop-based cross-platform solution; I remember the old Mono, which has been popular (see the posts "What do you suggest for cross platform apps, including web" and "Technology choice for cross platform development (desktop and phone)?"). I realize the desktop solution might be hard to maintain, result in some compatibility issues, and demand test environments. Can HTML5 handle moderate-to-high complexity and data flow? Or would it be better to rely on a desktop-based app for better scalability and performance?
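
    For reference, the HTML5 offline route in option 1 hinges on the cache manifest format, which is small enough to show in full; a minimal sketch with made-up file names (the page opts in via <html manifest="offline.appcache">):

        CACHE MANIFEST
        # v1 - bump this comment to force clients to re-download the cache
        CACHE:
        index.html
        app.js
        styles.css
        NETWORK:
        # everything not listed above still requires a connection
        *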

    Read the article

  • Exempt programs from using active VPN connection

    - by Oxwivi
    When I connect to a VPN, all my network traffic is automatically routed through it. Is there a way to add exemptions to that? I don't know if adding exceptions has anything to do with the VPN protocol, but the VPN I'm using is OpenVPN. Speaking of OpenVPN, why is it not installed by default on Ubuntu, unlike PPTP? I could not get the list of IRCHighWay's servers, and this is the result I get trying to connect in XChat with the bash script running:

        * Looking up irc.irchighway.net
        * Connecting to irc.irchighway.net (65.23.153.98) port 6667...
        * Connected. Now logging in...
        * You have been K-Lined.
        * *** You are not welcome on this network.
        * *** K-Lined for Open proxies are not allowed. (2011/02/26 01.21)
        * *** Your IP is 173.0.14.9
        * *** For assistance, please email [email protected] and include everything shown here.
        * Closing Link: 0.0.0.0 (Open proxies are not allowed. (2011/02/26 01.21))
        * Disconnected (Remote host closed socket).

    The IP 173.0.14.9 is the one due to my VPN. I had forgotten to check ip route list before running the script, and this is the one after running it:

        ~$ ip route list
        99.192.193.241 dev ppp0  proto kernel  scope link  src 173.0.14.9
        173.0.14.2 via 192.168.1.1 dev eth1  proto static
        173.0.14.2 via 192.168.1.1 dev eth1  src 192.168.1.3
        192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.3  metric 2
        169.254.0.0/16 dev eth1  scope link  metric 1000
        default dev ppp0  proto static

    Oh, and running the script returned this output:

        ~$ sudo bash irc_route.sh
        Usage: inet_route [-vF] del {-host|-net} Target[/prefix] [gw Gw] [metric M] [[dev] If]
               inet_route [-vF] add {-host|-net} Target[/prefix] [gw Gw] [metric M]
                          [netmask N] [mss Mss] [window W] [irtt I]
                          [mod] [dyn] [reinstate] [[dev] If]
               inet_route [-vF] add {-host|-net} Target[/prefix] [metric M] reject
               inet_route [-FC] flush    NOT supported

    I ran the script after connecting to the VPN.
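
    For what it's worth, the usual exemption technique is a more-specific route: it wins over the tunnel's default route, so a single host can be sent via the LAN gateway instead of the VPN. A sketch using the addresses from the output above (65.23.153.98 is the IRC server, 192.168.1.1 the LAN gateway on eth1):

        # route the IRC server via the LAN gateway rather than the ppp0 tunnel
        sudo ip route add 65.23.153.98/32 via 192.168.1.1 dev eth1
        # remove the exemption later
        sudo ip route del 65.23.153.98/32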

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should more properly be thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.

    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas.

    In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top-level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error.

    The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.

    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero-tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

            Enables warning messages for libraries specified with the -l
            command line option that are found by examining the default
            search paths provided by the link-editor. If a libname value
            is provided, the default library warning feature is enabled,
            and the specified library is added to a list of libraries for
            which no warnings will be issued. Multiple -z assert-deflib
            options can be specified in order to specify multiple
            libraries for which warnings should not be issued.

            The libname value should be the name of the library file, as
            found by the link-editor, without any path components. For
            example, the following enables default library warnings, and
            excludes the standard C library:

                ld ... -z assert-deflib=libc.so ...

            -z assert-deflib is a specialized option, primarily of
            interest in build environments where multiple objects with
            the same name exist and tight control over the library used
            is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command-line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011). Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Language parsing to find important words

    - by Matt Huggins
    I'm looking for some input and theory on how to approach a lexical topic. Let's say I have a collection of strings, which may be just one sentence or potentially multiple sentences. I'd like to parse these strings and rip out the most important words, perhaps with a score that denotes how likely each word is to be important. Let's look at a few examples of what I mean. Example #1: "I really want a Keurig, but I can't afford one!" This is a very basic example, just one sentence. As a human, I can easily see that "Keurig" is the most important word here. Also, "afford" is relatively important, though it's clearly not the primary point of the sentence. The word "I" appears twice, but it is not important at all, since it doesn't really tell us any information. I might expect to see a hash of words/scores something like this: "Keurig" => 0.9, "afford" => 0.4, "want" => 0.2, "really" => 0.1, etc. Example #2: "Just had one of the best swimming practices of my life. Hopefully I can maintain my times come the competition. If only I had remembered to take off my non-waterproof watch." This example has multiple sentences, so there will be more important words throughout. Without repeating the scoring exercise from example #1, I would probably expect to see two or three really important words come out of this: "swimming" (or "swimming practice"), "competition", and "watch" (or "waterproof watch" or "non-waterproof watch", depending on how the hyphen is handled). Given a couple of examples like this, how would you go about doing something similar? Are there any existing (open source) libraries or algorithms in programming that already do this?
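
    As a crude baseline for what such scoring keys on (not a library recommendation, just a sketch): plain term frequency with a stop-word list already pushes "Keurig" and "afford" above "I" and "really"; real systems then weight by corpus rarity (tf-idf) or part of speech. Assuming a one-word-per-line stopwords.txt:

        tr -cs '[:alpha:]' '\n' < input.txt \
          | tr '[:upper:]' '[:lower:]' \
          | grep -vwFf stopwords.txt \
          | sort | uniq -c | sort -rn | head

    The counts can be normalized by the largest count to get rough 0-1 scores like those in example #1.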

    Read the article
