Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.

Page 439 of 1038

  • Auto mount USB drive with permissions for all users

    - by oneaustin
    I have an Ubuntu 14.04 based Media Center and I store the media files on a USB HDD. I add files to the drive directly on a Mac, so I have it formatted as FAT32. The problem is that after reconnecting the drive to the Ubuntu machine, it mounts at /media/user/drivename and only the root user is allowed access. I need several applications to have full access to this drive. I can change file permissions in the terminal, but it doesn't change because of the /media/user location. I am able to manually run sudo mount /dev/sdc1 /media/drivename & sudo chmod 777 /media/drivename, but the mount point changes each time. Is there a way to make this drive always mount where root and other applications have access?
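
    One common way to get a fixed mount point that all users can write to is a UUID-based /etc/fstab entry; this is only a sketch, and the UUID below is a placeholder you would replace with the real value reported by sudo blkid:

        # /etc/fstab - mount the FAT32 media drive at a fixed location, writable by all users
        UUID=1234-ABCD  /media/drivename  vfat  defaults,nofail,uid=1000,gid=1000,umask=000  0  0

    With an entry like this the drive always lands at /media/drivename, and the uid/gid/umask options make it readable and writable without chmod (FAT32 has no Unix permissions for chmod to change anyway).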

    Read the article

  • Do they ask too much on this job?

    - by user58404
    I am looking for a web developer job and this job description caught my eye. I am not sure how much they offer, but I was wondering if anyone here meets all of their requirements? To me, that's a lot of knowledge.
      - 2 to 4+ years experience building web sites and applications in a professional environment
      - Strong working knowledge of HTML5 and CSS3
      - Strong working knowledge of JavaScript, jQuery, AJAX
      - Working knowledge of Ruby on Rails or similar MVC framework
      - Working knowledge of ExpressionEngine, Wordpress or similar CMS
      - Experience administering a LAMP-based server
      - Experience with cross-platform and cross-browser website testing
      - Comfortable working with version control (preferably Git)
      - Proficient with Adobe Photoshop, Illustrator, and Fireworks
      - Comfortable working on a Mac
      - Self-starter with excellent time-management skills and the ability to meet challenging deadlines
      - Ability to work independently with minimal supervision
      - Desire to work on a small team
    Bonus Skills:
      - Experience deploying to Heroku or similar PaaS provider
      - Experience developing Facebook applications
      - A strong sense of design
      - Cool open source projects (send us your Github account!)
      - Advanced working knowledge of server administration and website deployment
      - Java and/or .NET experience

    Read the article

  • Is a yobibit really a meaningful unit? [closed]

    - by Joe
    Wikipedia helpfully explains: The yobibit is a multiple of the bit, a unit of digital information storage, prefixed by the standards-based multiplier yobi (symbol Yi), a binary prefix meaning 2^80. The unit symbol of the yobibit is Yibit or Yib.[1][2] 1 yobibit = 2^80 bits = 1208925819614629174706176 bits = 1024 zebibits.[3] The zebi and yobi prefixes were originally not part of the system of binary prefixes, but were added by the International Electrotechnical Commission in August 2005.[4] Now, what in the world actually takes up 1,208,925,819,614,629,174,706,176 bits? The information content of the known universe? I guess this is forward thinking -- maybe astrophysics or nanotech, or even DNA analysis really will require these orders of magnitude. How far off do you think all this is? Are these really meaningful units?

    Read the article

  • Android Dynamic 2D Map

    - by Deltharis
    My problem is that I want to create a 2D tiled map. Yes, I know it's been asked a lot. I've seen answers that propose the use of Tiled, however it only allows (or so it seems to me) generating static maps that do not change once generated. And I need a large uniform space of empty tiles, upon which players may place various buildings (some spanning more than one tile and logically being the same one). How do I approach this in Android? Do I make some kind of TableLayout, use an arbitrarily large number of rows and ImageViews (with my emptyTile), then somehow work out event-based changing of image IDs from there? I'd think that only a portion of that map should be visible at a time, but I don't see how scrolling around could be part of that structure.

    Read the article

  • NFS mountpoint named "share" breaks ls and man

    - by freddyb
    I mounted an NFS share to ~/share. This works fine as long as I'm at home, where the NFS share is in reach. Whenever I'm not, this seems to break access to all manpages. Using man (or ls in my home dir) waits forever. Checking with strace reveals that they try to access the folder called share. Unmounting fails too, even with -l (lazy) and -f (force). I am asking three things here: Is "share" a magic name? Does something like MANPATH exist which I should avoid? How do I unmount without rebooting? (I already commented the share out in fstab.) What would you suggest I do to get network/location-based mounting of NFS shares?
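
    For reference, a sketch of the kind of fstab options that make an unreachable NFS server error out quickly instead of blocking every process that touches the mount point (the server name and export path are placeholders, not from the question):

        # /etc/fstab - NFS mount that fails fast when the server is unreachable
        homeserver:/export/media  /home/freddyb/share  nfs  noauto,soft,timeo=30,retrans=2  0  0

    soft returns an error to the application once the retries are exhausted instead of waiting forever, which is usually what makes man and ls hang on a dead NFS mount with the default hard behaviour.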

    Read the article

  • MySQL: Best of Breed Database

    - by Bertrand Matthelié
    Oracle offers best of breed technology at every layer of the stack, from servers and storage to applications. Discover why MySQL is a best of breed database solution for: web-based applications, including the next generation of highly demanding web, cloud, mobile and social applications; distributed applications requiring a powerful and reliable embedded database; and custom and departmental enterprise applications on Windows and other platforms. Check out our Resource Center to get access to white papers and other resources. And, remember to register for MySQL Connect if you haven't done so yet. You can still save US$ 300 over the on-site fee - Register Now!

    Read the article

  • Twitter Customer Sentiment Analysis

    - by Liam McLennan
    The breakable toy that I am currently working on is a Twitter customer sentiment analyser. It scrapes Twitter for tweets relating to a particular organisation, applies a machine learning algorithm to determine if the content of a tweet is positive or negative, and generates reports of the sentiment data over time, correlated to dates, events and news feeds. I'm having lots of fun building this, but I would also like to learn if there is a market for quantified sentiment data. So that I can start to show people what I have in mind, I have created a mockup of the simplest and most important report. It shows customer sentiment over time, with important events highlighted. As the user moves their mouse to the right (forward in time), the source data area scrolls up to display the tweets from that time. The tweets are colour-coded based on sentiment rating. After I started working on this project I discovered that a team of students has already built something similar. It is a lot of fun to enter your employer's name and see what it says.

    Read the article

  • Designing a user-defined list to be stored in a relational database - Should I include user index?

    - by Zaemz
    By index, I mean that as the user creates the list, each item receives an integer index for its place in that particular list. Since there will be a table of ListItems, I'd prefer to avoid using the name "Index" for the field. Then I was thinking: should I even include the list index in the database? I figured I would, because then the list can be recreated in the same fashion every time. Or I could order the list for the user based on its actual primary key, since the list items are created in succession anyway... What should I do?
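
    If you do decide to keep an explicit ordering column, a minimal sketch of the idea (table and column names are made up for illustration, using generic SQL):

        CREATE TABLE ListItem (
            ListItemId  INT PRIMARY KEY,
            ListId      INT NOT NULL,
            Position    INT NOT NULL,          -- the item's place within its list; avoids the name "Index"
            Label       VARCHAR(255) NOT NULL,
            UNIQUE (ListId, Position)
        );

        -- read a list back in the user's order
        SELECT Label FROM ListItem WHERE ListId = 7 ORDER BY Position;

    The unique constraint on (ListId, Position) keeps two items from claiming the same slot; relying on the primary key alone works only as long as items are never reordered or inserted in the middle.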

    Read the article

  • Is using built-in sorting considered cheating in practice tests?

    - by user10326
    I am using one of the practice online judges, where a practice problem is posed, one submits an answer, and gets back whether or not it is accepted based on test inputs. My question is the following: in one of the practice tests, I needed to sort an array as part of the solution algorithm. If it matters, the problem was: find 2 numbers in an array that add up to a specific target. As part of my algorithm I sorted the array, but to do that I used Java's quicksort rather than implementing sorting as part of the same method. To do that I had to call: java.util.Arrays.sort(array); Since I had to use the fully qualified name, I am wondering if this is a kind of "cheating". (I mean, perhaps an online judge does not expect this.) Is it? In a formal interview (since these tests are practice for interviews, as I understand) would this be acceptable?

    Read the article

  • Installing Ubuntu 13.10 'Saucy Salamander' on L75D-A7280 hangs on a black screen

    - by Riven
    Trying to get Ubuntu 13.10 to work and it will not. I've seen that somebody is having a similar problem: some things fail, then it just hangs on a black screen (after pressing F1 to see what it's doing). My system is a Toshiba Satellite L75D-A7280, and I have tried two different files that I had downloaded from Ubuntu.com with no luck. My laptop came with Windows 8, and following the dual-boot directions Ubuntu 12.04 was installed and worked perfectly, except for completely obliterating Windows 8 and voiding my warranty, meaning neither Toshiba nor the retail center I bought my system from can help me legally, besides giving advice... nor can I return it to get a non-UEFI based system. I really need to figure this out; I am a student and need my laptop working properly with any OS I put on it. I will continue searching for any information I can find.

    Read the article

  • Best Practice to return responses from service

    - by A9S6
    I am writing a SOAP-based ASP.NET Web Service with a number of methods dealing with Client objects, e.g.:
      - int AddClient(Client c) - returns the client ID when successful
      - List<Client> GetClients()
      - Client GetClientInfo(int clientId)
    In the above methods, the return value/object of each method corresponds to the "all good" scenario, i.e. a client ID will be returned if AddClient was successful, or a List<Client> will be returned by GetClients. But what if an error occurs? How do I convey the error message to the caller? I was thinking of having a Response class: Response { StatusCode, StatusMessage, Details }, where Details will hold the actual response, but in that case the caller will have to cast the response every time. What are your views on the above? Is there a better solution?
    ---------- UPDATED -----------
    Is there something new in WCF for the above? What difference will it make if I change the ASP.NET Web Service to a WCF Service?
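
    A minimal sketch of the wrapper idea, using a generic payload so the caller does not have to cast (the names are illustrative, not from the original post):

        public class ServiceResponse<T>
        {
            public int StatusCode { get; set; }       // e.g. 0 = OK, non-zero = error
            public string StatusMessage { get; set; } // human-readable error text, if any
            public T Details { get; set; }            // the actual payload for the "all good" case
        }

        // the operations would then be declared along these lines:
        //   ServiceResponse<int> AddClient(Client c);
        //   ServiceResponse<List<Client>> GetClients();

    Whether a wrapper like this or SOAP faults (FaultException/FaultContract in WCF) is the better fit is exactly the trade-off the question is asking about.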

    Read the article

  • Given two sets of DNA, what does it take to computationally "grow" that person from a fertilised egg and see what they become? [closed]

    - by Nicholas Hill
    My question is essentially entirely in the title, but let me add some points to prevent some "why on earth would you want to do that" sort of answers: This is more of a mind experiment than an attempt to implement real software. For fun. Don't worry about computational speed or the number of available memory bytes. Computers get faster and better all of the time. Imagine we have two data files: Mother.dna and Father.dna. What else would be required? (Bonus point for someone who tells me approximately how many GB each file would be, and whether the sizes of the files are exactly the same number of bytes for everyone alive on Earth!) There would ideally need to be a way to see what the egg becomes as it grows into a human adult. If you fancy, feel free to outline the design. I am initially thinking that there'd need to be some sort of volumetric voxel-based 3D environment for simulation purposes.
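
    For the bonus question, a rough back-of-the-envelope estimate (mine, not from the post), assuming a raw 2-bit-per-base encoding of roughly 3.2 billion base pairs per genome:

        3.2e9 bases x 2 bits/base = 6.4e9 bits = 0.8e9 bytes, i.e. about 800 MB per parent file

    The files would not be exactly the same size for everyone, since individual genomes differ slightly in length, and real sequence formats store far more than 2 bits per base.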

    Read the article

  • Is there anything in .NET that allows me to define a grammar and generate a programming language?

    - by user1525474
    I have a course in which the professor has asked us to create a DSL for our final project. In the first classes he presented Xtext with Eclipse. This being a new course, I am still a bit fuzzy on what a Domain Specific Language means. This is my current understanding: a domain specific language is a language that is created for specific problems in software development. Examples of DSLs are PHP, SQL, JavaScript, and on the opposite side are languages like Java, C#, C++, Ruby etc. Please feel free to correct me if I am wrong. What I would like to know: is there any tool for .NET/Visual Studio that is similar to Xtext, that allows me to define a grammar and generate a programming language based on that, with an activity diagram?

    Read the article

  • General usage question of vbo

    - by CSharpie
    First of all, I am sorry if my question is too broad. I am developing a tile-based game and switched from those gl.Begin calls to using VBOs. This is kind of working already; I managed to render a hexagonal polygon with a simple shader applied. What I am not sure about is how to implement the "whole" tile concept. Concretely, the questions are:
      - Is it better to create 1 VBO for a single tile and render it n times in different positions, or to render one huge VBO that represents the whole "world"?
      - Depending on the answer above, what is the best way to draw a "line grid"? Overlay with the same VBO using the respective polygon mode, or is there a way to let the shader do this?
      - How would frustum culling or mouse picking work then? Do I need to keep the VBO data in memory?
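
    A minimal sketch of the "one VBO for the visible part of the world" approach, assuming OpenTK-style C# bindings; Tile, world, camera and BuildVisibleTileVertices are made-up names for illustration:

        using OpenTK.Graphics.OpenGL;

        // The full map stays in an ordinary 2D array, so picking, culling and game
        // logic read plain memory and never need to touch the VBO.
        Tile[,] map = world.Tiles;
        float[] vertices = BuildVisibleTileVertices(map, camera);   // only the tiles in view

        int vbo;
        GL.GenBuffers(1, out vbo);
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferData(BufferTarget.ArrayBuffer,
                      (IntPtr)(vertices.Length * sizeof(float)),
                      vertices,
                      BufferUsageHint.DynamicDraw);   // re-uploaded when the camera scrolls onto new tiles

        // Each frame: bind this one VBO, set the attribute pointers, and issue a single
        // draw call covering every visible tile, rather than one draw call per tile.

    The grid overlay can then be a second pass over the same vertex data with a line polygon mode, or a fragment-shader effect; either way the tile geometry only lives in one buffer.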

    Read the article

  • ASP.NET Combo Box and List Box Performance Improvements - v2010 vol 1

    Check out this great new performance feature of our ASP.NET combo box and list box controls for the DXperience v2010.1 release. You can now manually populate lists with items based on the currently applied filter criteria. This means that you can significantly decrease web server workload by loading only a subset of all items when working with large datasets. For instance, when using a large data source, you can only request a few records to be visible on the screen. The rest of the items can...

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
    Mine looks like this:

        public interface ICacheProvider
        {
            void Add(string key, object item, int duration);
            T Get<T>(string key) where T : class;
            void Remove(string key);
        }

    Now we'll create two implementations of this interface... one for Azure cache, one for HttpRuntime:

        public class AzureCacheProvider : ICacheProvider
        {
            public AzureCacheProvider()
            {
                _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
            }

            private readonly DataCache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
            }

            public T Get<T>(string key) where T : class
            {
                return _cache.Get(key) as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

        public class LocalCacheProvider : ICacheProvider
        {
            public LocalCacheProvider()
            {
                _cache = HttpRuntime.Cache;
            }

            private readonly System.Web.Caching.Cache _cache;

            public void Add(string key, object item, int duration)
            {
                _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public T Get<T>(string key) where T : class
            {
                return _cache[key] as T;
            }

            public void Remove(string key)
            {
                _cache.Remove(key);
            }
        }

    Feel free to expand these to use whatever cache features you want. I'm not going to go over dependency injection here, but I assume that if you're using ASP.NET MVC, you're using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a "kernel" instead of a container). For this example, I'll show you how StructureMap does it. It uses a convention-based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

        ObjectFactory.Initialize(x =>
        {
            x.Scan(scan =>
            {
                scan.AssembliesFromApplicationBaseDirectory();
                scan.WithDefaultConventions();
            });
            if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
                x.For<ICacheProvider>().Use<AzureCacheProvider>();
            else
                x.For<ICacheProvider>().Use<LocalCacheProvider>();
        });

    If you use Ninject or Windsor or something else, that's OK. Conceptually they're all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller -> BusinessLogicClass -> Repository.
    Let's say your repository class looks like this:

        public class MyRepo : IMyRepo
        {
            public MyRepo(ICacheProvider cacheProvider)
            {
                _context = new MyDataContext();
                _cache = cacheProvider;
            }

            private readonly MyDataContext _context;
            private readonly ICacheProvider _cache;

            public SomeType Get(int someTypeID)
            {
                var key = "somename-" + someTypeID;
                var cachedObject = _cache.Get<SomeType>(key);
                if (cachedObject != null)
                {
                    _context.SomeTypes.Attach(cachedObject);
                    return cachedObject;
                }
                var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
                _cache.Add(key, someType, 60000);
                return someType;
            }

            ... // more stuff to update, delete or whatever, being sure to remove
                // from cache when you do so
        }

    When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn't care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case "somename-" plus the ID of the object, and then remove it from cache in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won't find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet-packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn't occur when using the HttpRuntime. That's something a lot of people debate about when using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Visual Studio 2010 RC and Entity Framework 4 RC Support in the New Version of ADO.NET Data Providers

    Devart has recently announced the release of dotConnect products for Oracle, MySQL, PostgreSQL, and SQLite - ADO.NET providers that offer Entity Framework support, LINQ to SQL support, and contain an ORM model designer for developing LINQ to SQL and EF models based on different database engines. New dotConnect ADO.NET providers offer complete support for Visual Studio 2010 Release Candidate and Entity Framework 4 Release Candidate. Entity Developer 2.80, a designer for modeling and code generation...

    Read the article

  • Getting Started with Oracle Fusion CRM Sales

    Designed from the ground-up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you hear about the unique advantages of Fusion CRM Sales, learn about the scope of the first release and discover how Fusion CRM Sales modules can be used to complement and enhance your existing sales solutions.

    Read the article

  • Handle php out of memory error

    - by PeterMmm
    I have a Drupal-based web site on a relatively small vserver (512 MB RAM). Recently the website has begun to return PHP out-of-memory messages like this: Fatal error: Out of memory (allocated 17039360) (tried to allocate 77824 bytes) in /home/... All php.ini memory limit parameters are set to off (-1). Probably the website has gained in complexity, content, etc. But I cannot quite interpret that message: does it mean that the whole request has allocated 17MB(?) right now and cannot get 7MB(?) more from the OS? Has the web server spent all its memory, or has the OS no more memory to allocate? I'm not sure if the memory overhead is coming from the web server or another service, because when I get the out-of-memory message I can't get into the server with ssh. After a while everything runs fine again.
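
    For reference, a rough reading of the numbers in that message (my interpretation, not part of the original question):

        allocated 17039360 bytes      ~ 16.25 MiB already held by the PHP process
        tried to allocate 77824 bytes ~ 76 KiB more, which failed

    Since memory_limit is -1, the "Out of memory" wording (rather than "Allowed memory size of N bytes exhausted") suggests the operating system refused the allocation, not PHP's own limit.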

    Read the article

  • How to install Syngen?

    - by inLoveWithPython
    I'm using Ubuntu 12.04. I found out that there is a near-obsolete program based on the obsolete Caml Special Light by someone at INRIA (ftp.inria.fr, directory lang/caml-light). The program is called Syngen. I need it to create syntax diagrams. But I am not able to run the binaries that come with it, and I'm not able to compile the source because it needs a compiler called cmlc. Can somebody help? I can find really little documentation about this on the internet.

    Read the article

  • Pending and Approval process

    - by zen
    So let's say I have a DB table with 8 columns, one of which is a unique auto-incrementing ID. I have a page that pulls in the info for each row based on the query-string ID. I want to give my users the ability to propose changes, kinda like a wiki setup. So I was thinking I should just have another duplicate table, or maybe a separate database altogether (without the auto-incrementing column and maybe with a date-edited column), that keeps all proposed changes in a queue; then, when I approve them, the script can move the row from the proposed DB to the real DB. Does this sound good, or is there a better process for this?
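
    A sketch of the staging-table idea described above, assuming MySQL-style syntax; the table and column names are made up for illustration:

        CREATE TABLE article_pending (
            pending_id   INT AUTO_INCREMENT PRIMARY KEY,
            article_id   INT NOT NULL,          -- the live row this proposal targets
            title        VARCHAR(255) NOT NULL, -- same content columns as the live table
            body         TEXT NOT NULL,
            date_edited  DATETIME NOT NULL
        );

        -- on approval: copy the proposed values over the live row, then drop the proposal
        UPDATE article AS a
        JOIN article_pending AS p ON p.article_id = a.article_id
        SET a.title = p.title, a.body = p.body
        WHERE p.pending_id = 42;

        DELETE FROM article_pending WHERE pending_id = 42;

    Keeping the staging rows (with a status column) instead of deleting them would also give a revision history for free.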

    Read the article

  • Convert Microsoft Word documents (.doc/x) into HTML files

    - by danie7L T
    Does anybody know of a good application to get this done quickly and efficiently? I bought Word Cleaner but the results are merely sufficient, and I need to go over all the generated HTML files to clean tons of useless injected tags like <strong>H</strong><strong>ell</strong><strong>o </strong><em>Wor</em><em>ld</em>. Most of the articles displayed on a website I manage are based on documents written in MS Word by people who have little idea of what margins, ordered/unordered lists, foot/endnotes etc. are for, and I cannot make them use something else. Does anyone have a tip to help me handle those pages more efficiently than going over them to correct and apply my CSS style? NB: Just for the record, using "Save as HTML DOC" in Word is faaar worse than Word Cleaner.
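
    A minimal sketch of one cleanup pass, assuming the main problem is runs of adjacent identical inline tags as in the example above (this is a rough heuristic, not a full HTML cleaner):

        using System.Text.RegularExpressions;

        // Collapse "</strong><strong>" (and the <em> equivalent) so that
        // <strong>H</strong><strong>ell</strong><strong>o </strong> becomes <strong>Hello </strong>.
        static string MergeAdjacentInlineTags(string html)
        {
            return Regex.Replace(html, @"</(strong|em)>\s*<\1>", "");
        }

    Regex passes like this only patch up the tag soup; anything structural (margins, list markup) usually needs a real HTML parser or a manual pass.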

    Read the article

  • How long till HTML5 canvas becomes a viable game development platform?

    - by Shouvik
    So I have been working on a web application. Invariably, what it boils down to is making simple games which were previously based on Flash or OpenGL. Now, I know Apple was moving away from Flash because it's proprietary, rather than because of their stated stance that it's got "pathetic performance"! Not true: try playing a canvas game, and I can assure you that at any point in time (including when it's idle) it will use up a fair bit of processing power just to redraw the UI. Now I do understand that this is my fault, because when the game is not active I should not be redrawing the canvas, but honestly it's a lot of work, and I suppose there should be libraries which should be able to assist me with that! So, how long will it be before I see a decent canvas library which handles these "tiny" issues for me? I can't honestly expect Steve Jobs to be doing anything more for HTML5! If someone knows of a good library, I am all ears...! :) PS: I use MooTools and am presently using the Mootools Canvas Library.

    Read the article

  • Is ubuntu-geoip (GeoClue) used for tracking?

    - by tijybba
    I am happily learning Ubuntu more closely now. I came across the process ubuntu-geoip-provider in the system monitor. Is it used for tracking, or for gathering nearest-server info, or for syncing time with the Internet, or perhaps for all these things? I searched for it but not enough information came through. If it is tracking, what kind of info is it gathering, and why is it doing that? It is based here: /usr/lib/ubuntu-geoip. I just wanted more detailed information about that. Also, can this be disabled? Is disabling recommended, or would doing so cause dependency-related (or other) problems?

    Read the article

  • Wrong download of Ubuntu 13.10 desktop: AMD instead of wanted Intel [duplicate]

    - by L. Williams
    This question already has an answer here: Difference between the i386 download and the amd64? (5 answers) My PCs have Intel CPUs, e.g. Core2 Quad, 64-bit; there is no AMD machine in the network. But from the ubuntu.com/download site, selecting 13.10 Desktop for 64-bit, it repeatedly offers only the *AMD.ISO version, which of course fails to install on my Intel (or Atom) based PCs. Wuzzup, and what URL has a download for the Intel CPU systems? Rem: this is for Saucy 13.10 Desktop 64 OS ISO. TIA.

    Read the article
