Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.

Page 169/400 | < Previous Page | 165 166 167 168 169 170 171 172 173 174 175 176  | Next Page >

  • Updating password hashing without forcing a new password for existing users

    - by Willem
    You maintain an existing application with an established user base. Over time it is decided that the current password hashing technique is outdated and needs to be upgraded. Furthermore, for UX reasons, you don't want existing users to be forced to update their password. The whole password hashing update needs to happen behind the scenes. Assume a 'simplistic' database model for users that contains: ID, Email, Password. How does one go about solving such a requirement? My current thoughts are:
    - create a new hashing method in the appropriate class
    - update the user table in the database to hold an additional password field
    - once a user successfully logs in using the outdated password hash, fill the second password field with the updated hash
    This leaves me with the problem that I cannot reasonably differentiate between users who have and those who have not updated their password hash, and thus will be forced to check both. This seems horribly flawed. Furthermore this basically means that the old hashing technique could be forced to stay indefinitely until every single user has updated their password. Only at that moment could I start removing the old hashing check and remove the superfluous database field. I'm mainly looking for some design tips here, since my current 'solution' is dirty, incomplete and what not, but if actual code is required to describe a possible solution, feel free to use any language.
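
    A minimal sketch of the rehash-on-login idea in Python (the question is language-agnostic), assuming the legacy scheme is unsalted SHA-1 and the new scheme is PBKDF2 from the standard library. Storing the new hashes with a recognisable prefix removes the need for a second password column and tells you exactly which users still carry an old hash:

        import hashlib, hmac, os, binascii

        NEW_PREFIX = "pbkdf2_sha256$"

        def legacy_hash(password):
            # The outdated scheme already in the database (unsalted SHA-1 here, as a stand-in).
            return hashlib.sha1(password.encode()).hexdigest()

        def new_hash(password):
            # Stronger scheme: salted PBKDF2 from the standard library.
            # The prefix marks the hash as upgraded, so no extra column is needed.
            salt = os.urandom(16)
            dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return NEW_PREFIX + binascii.hexlify(salt).decode() + "$" + binascii.hexlify(dk).decode()

        def verify(password, stored):
            if stored.startswith(NEW_PREFIX):
                _, salt_hex, dk_hex = stored.split("$")
                dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                         binascii.unhexlify(salt_hex), 100_000)
                return hmac.compare_digest(binascii.hexlify(dk).decode(), dk_hex)
            return hmac.compare_digest(legacy_hash(password), stored)

        def login(password, stored):
            """Returns (ok, hash_to_store); upgrades the hash transparently on success."""
            if not verify(password, stored):
                return False, stored
            if not stored.startswith(NEW_PREFIX):
                stored = new_hash(password)   # rehash behind the scenes, write back to the DB
            return True, stored

    Once every stored hash starts with the new prefix, the legacy branch and the old hashing code can be deleted.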

    Read the article

  • The Science Behind Salty Airline Food

    - by Jason Fitzpatrick
    In this collection, artist Signe Emma combines a scientific overview of the role salt plays in airline food with electron microscope scans of salt crystals arranged to look like the views from an airplane–a rather clever and visually stunning way to deliver the message. Attached to the collection is this explanation of why airlines load their snacks and meals with salt: White noise consists of a random collection of sounds at different frequencies, and scientists have demonstrated that it is capable of diminishing the taste of salt. At low-pressure conditions, higher taste and odour thresholds of flavourings are generally observed. At 30,000 feet the cabin humidity drops by 15%, and the lowered air pressure forces bodily fluids upwards. With less humidity, people have less moisture in their throat, which slows the transport of odours to the brain's smell and taste receptors. That means that if a meal is to taste the same up in the air as it does on the ground, it needs about 30% extra salt. To combat the double assault on our sense of taste, the airlines boost the salt content to compensate. For more neat microscope scans posing as high-altitude photographs, hit up the link below.

    Read the article

  • SFTP permission denied on files owned by www-data

    - by Charles Roper
    I have a pretty standard server set up running Apache and PHP. An app I am running creates files and these are owned by the Apache user www-data. Files that I upload via SFTP are owned by my own user charlesr. All files are part of the www-data group. My problem is that I cannot modify or overwrite any of the files via SFTP which are owned by www-data, even though charlesr is part of the www-data group. I can modify the files no problem via an SSH session. So I'm not sure what to do. How do I give my SFTP session permissions to modify www-data owned files? For a bit of background, these are the notes I wrote for myself when setting up the server:
    Now set up permissions on `/var/www` where your files are served from by default:
        $ sudo adduser $USER www-data
        $ sudo chgrp -R www-data /var/www
        $ sudo chmod -R g+rw /var/www
        $ sudo chmod -R g+s /var/www
    Now log out and log in again to make the changes take hold. The previous set of commands does the following:
    1. adds the current user ($USER) to the `www-data` group;
    2. changes `/var/www` to belong to the `www-data` group;
    3. adds read/write permissions to the group that `/var/www` belongs to;
    4. sets the SGID bit on `/var/www`; this final point bears some explaining.
    And then I go on to explain to myself what setting the SGID bit means (i.e. all files created in /var/www become part of the www-data group automatically). Btw, nothing feels sweeter than going back and reading your own detailed notes on the what, how and why of your own server set up when trying to troubleshoot like this - I recommend it highly to all beginners like myself :-)

    Read the article

  • Implementing RLE into a tilemap, or how to create a large 3D array?

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, but the 3D side comes in when moving down into caves and whatnot. Now this is not memory efficient, and I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same x/y space as a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however now it works like this: map.blockData[x][y] = new Block(z); I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java but I reckon the concept carries across different languages. Edit: As Will posted, RLE sounds like the best method for achieving a fast 3D array. However I'm struggling to understand how I would even start to implement it. Would I create a 4D array, the 4th dimension being something which controls how many tiles to skip? Or would the x-axis simply change altogether and have large gaps in between - for example [5][y][z] would skip 5 tiles? Is there something really obvious here which I am missing? The number of z levels I'm trying to have is around 66, and it would be preferable to have up to or more than 1000 in x and y.
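
    As a rough illustration of the RLE idea (in Python rather than Java, but the concept carries over), each x/y column can store (count, tile) runs instead of one entry per z level; with around 66 z levels and long stretches of identical tiles, a column collapses to a handful of runs:

        def rle_encode(column):
            """Collapse a list of tile ids (one per z level) into [count, tile] runs."""
            runs = []
            for tile in column:
                if runs and runs[-1][1] == tile:
                    runs[-1][0] += 1
                else:
                    runs.append([1, tile])
            return runs

        def rle_get(runs, z):
            """Look up the tile at depth z by walking the runs."""
            for count, tile in runs:
                if z < count:
                    return tile
                z -= count
            raise IndexError("z outside column")

        def rle_set(runs, z, tile):
            """Decode, modify, re-encode -- simple, and fine for occasional edits."""
            column = [t for count, t in runs for _ in range(count)]
            column[z] = tile
            return rle_encode(column)

        # A 66-deep column: 10 air tiles, 55 stone, one lava block at the bottom.
        column = [0] * 10 + [1] * 55 + [2]
        runs = rle_encode(column)       # [[10, 0], [55, 1], [1, 2]] -- 3 runs, not 66 cells
        assert rle_get(runs, 12) == 1
        runs = rle_set(runs, 12, 3)     # carve a cave tile at z = 12
        assert rle_get(runs, 12) == 3

    So instead of a 4D array, the 2D map stays 2D and each cell holds a small list of runs rather than a single block.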

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions around the importance of readable code and whether performance improvements of micro-seconds in code were worth even considering. My father is a retired programmer, and I showed him the responses; he was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not good programmers. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (of PHP in the above case)? The environmental factors could be important too - the internet consumes 10% of the world's energy, and I wonder how wasteful a few micro-seconds of code are when replicated trillions of times on millions of websites. I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding? EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us not to make obvious bad choices when coding, such as if (expensiveFunction() && counter < X), which should be if (counter < X && expensiveFunction()) (example from @zidarsk8). This could be an inexpensive function, and therefore changing the code would be micro-optimisation. But, with a basic understanding, you would not have to, because you would write it correctly in the first place.
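
    For what it's worth, the cost of getting that ordering wrong is easy to measure; here is a small Python sketch with timeit, where expensive_function and the threshold are invented purely for illustration:

        import timeit, time

        def expensive_function():
            time.sleep(0.001)      # stand-in for a genuinely costly call
            return True

        counter, X = 50, 10        # the cheap condition is usually False, so ordering matters

        slow = timeit.timeit(lambda: expensive_function() and counter < X, number=200)
        fast = timeit.timeit(lambda: counter < X and expensive_function(), number=200)

        print(f"expensive first: {slow:.3f}s, cheap first: {fast:.3f}s")
        # With the cheap comparison first, short-circuiting means expensive_function() never runs.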

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would they implement the server of such games, and what is stored in the database versus what is stored in RAM? Is the state of the world stored in one piece, or is it distributed among a number of storage nodes? Does anyone know a resource to study the problems and solutions of creating such games? [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right one. Having stored the world state in MongoDB, I would implement an event collection in which all the changes to the world are registered. Changes that are meant to happen in the future will come with an action date set to the future, and those that are to be carried out immediately will be set to now. Having this datastore as the central point of the system, players will issue their actions as events inserted into the datastore. At the other end of the system, I'll have a constantly-running process taking events out of the datastore which are due to be carried out and not done yet. Executing an event means applying some update to the world's state and thus the datastore. As scalable as this design sounds, I'm not sure if it will be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have the growth of resources in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
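
    A toy version of the proposed dispatcher in Python, with an in-memory priority queue standing in for the MongoDB event collection and apply_to_world as a placeholder for the real state update; events dated in the future stay queued, and only those whose action date has passed get executed:

        import heapq, itertools, time

        event_queue = []             # (action_time, seq, event); a DB collection in the real design
        _seq = itertools.count()     # tie-breaker so heapq never has to compare the event dicts

        def schedule(event, delay_seconds=0.0):
            heapq.heappush(event_queue, (time.time() + delay_seconds, next(_seq), event))

        def apply_to_world(event):
            # Placeholder for the actual world-state update (a datastore write in practice).
            print("applying:", event)

        def dispatcher_tick():
            """Execute every event whose action date is due; leave future events queued."""
            now = time.time()
            while event_queue and event_queue[0][0] <= now:
                _, _, event = heapq.heappop(event_queue)
                apply_to_world(event)

        schedule({"type": "build_barracks", "player": 1})                          # carry out now
        schedule({"type": "attack", "player": 1, "target": 2}, delay_seconds=2.0)  # carry out later

        dispatcher_tick()    # runs only the immediate event
        time.sleep(2.1)
        dispatcher_tick()    # now the delayed attack is due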

    Read the article

  • Compiz slow under proprietary nvidia driver

    - by gsedej
    Hi! I am using Ubuntu 10.10 and have a problem with the proprietary nvidia driver for my GeForce GTS 250: poor Compiz performance. There is also the open-source "nouveau" driver. Proprietary: I tried many versions but none works fast on the desktop. This means 30 FPS without heavy effects. Currently I am using version 270.18. Even normal desktop use feels bad (moving windows). In games (and 3D benchmarks) it is really good! (Unigine Heaven works well.) Open-source "nouveau": Very fast on the desktop with heavy effects (blur, ...). I get 300 FPS and more, even in Expo mode. Games were good but not as good as with the proprietary driver. And the driver causes xorg to crash, even the latest (ppa:xorg-edgers/nouveau), so I switched back to proprietary. I also have a computer with Ubuntu 10.04, a GeForce 8600GT and drivers around 185.x, and Compiz works great there. There is a similar question: Nvidia proprietary driver performance in 10.10. Which version of the nvidia (proprietary) driver is fast with Compiz in Ubuntu 10.10? How do you install a specific version of the nvidia driver? Is it the case that each newer driver works slower with Compiz?

    Read the article

  • Windows Azure and Server App Fabric – kinsmen or distant relatives?

    - by kaleidoscope
    If you are into Windows Azure then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not - Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles - http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric Many have fallen prey to this ambiguous nomenclature, but it's not without a purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed Dublin, the app fabric (oops... Windows Server App Fabric) provides add-ons like monitoring, tracking and persistence for your hosted Workflows and Services, without the developer having to worry about these functionalities. Along with this, it also provides distributed in-memory caching features from Velocity caching. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part. So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined if Microsoft's roadmap is to be believed - "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with the Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed... Read on: http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • MVC + 3-tier: where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as a reference: CodeProject: MVC + N-tier + Entity Framework, and Separating data access in ASP.NET MVC. I have the following design so far.
    Presentation Layer (PL) (main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main - Views/, Controllers/, ...
    Business Logic Layer (BLL): MyProjectName.BLL - ViewModels/, ProjectServices/, ...
    Data Access Layer (DAL): MyProjectName.DAL - Models/, Repositories.EF/, Repositories.Dapper/, ...
    Now, PL references BLL and BLL references DAL. This way a lower layer does not depend on the one above it. In this design PL invokes a service of the BLL. PL can pass a ViewModel to BLL and BLL can pass a ViewModel back to PL. Also, BLL invokes the DAL layer and the DAL layer can return a Model back to BLL. BLL can in turn build a ViewModel and return it to PL. Up to now this pattern was working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do the joins and then select new MyViewModel(){ ... }. But now, in the DAL, I do not have access to where the ViewModels are defined (in the BLL). This means I cannot do joins in the DAL and return the result to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query) and the BLL would then use the results of these to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
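
    One way out is to keep the DAL ignorant of ViewModels: repositories return domain models (or flat read DTOs), and the BLL service does the composition. A hedged Python sketch of the shape of it; the C#/EF mechanics differ, and the names (CustomerOrdersViewModel, etc.) are invented for illustration:

        from dataclasses import dataclass

        # --- DAL: knows only about domain models, never about ViewModels ---
        @dataclass
        class Customer:
            id: int
            name: str

        @dataclass
        class Order:
            id: int
            customer_id: int
            total: float

        class CustomerRepository:
            def get(self, customer_id):
                return Customer(customer_id, "Acme Ltd")                       # stub query

        class OrderRepository:
            def for_customer(self, customer_id):
                return [Order(1, customer_id, 99.0), Order(2, customer_id, 45.5)]  # stub query

        # --- BLL: owns the ViewModel and joins the pieces together ---
        @dataclass
        class CustomerOrdersViewModel:
            customer_name: str
            order_count: int
            total_spent: float

        class CustomerService:
            def __init__(self, customers, orders):
                self.customers, self.orders = customers, orders

            def customer_orders(self, customer_id):
                customer = self.customers.get(customer_id)
                orders = self.orders.for_customer(customer_id)
                return CustomerOrdersViewModel(
                    customer_name=customer.name,
                    order_count=len(orders),
                    total_spent=sum(o.total for o in orders),
                )

        vm = CustomerService(CustomerRepository(), OrderRepository()).customer_orders(7)
        print(vm)

    An alternative, if the extra round trips hurt, is to add dedicated query/read objects in the DAL that return flat DTOs (not ViewModels), which the BLL then maps onto its ViewModels.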

    Read the article

  • How can I make Twitter, Facebook and Reddit share buttons load last?

    - by Daniel Bingham
    I have a website with a number of pages that sport Twitter, Facebook and Reddit share buttons. They take forever to load, and until they do the rest of the page doesn't load. So how can I make them load last? Currently, they are loaded something like this:
        <div class="item"><a href="http://twitter.com/share" class="twitter-share-button" data-count="vertical" data-via="FridgeToFood" data-related="danielBingham:Recipe and update tweets from Fridge to Food.">Tweet</a><script type="text/javascript" src="http://platform.twitter.com/widgets.js"></script></div>
        <div class="item"><script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script><fb:like layout="box_count" width="40"></fb:like></div>
        <div class="item">
            <script type="text/javascript">reddit_target='recipes';</script>
            <script type="text/javascript" src="http://reddit.com/static/button/button2.js"></script>
        </div>
    They are in a div called "shareWrapper" and are loading to one side of the page. The buttons load wherever the script code is placed. As far as I know, I can't place the script code at the bottom of the page and move the resulting buttons after the fact. I want them to appear near the top, which right now means they are stopping everything below them from loading for several seconds. I tried loading them using JavaScript with jQuery's $(document).ready(), but that failed. It seems to leave the page in some sort of loading loop from which it never emerges. Are there other ways to get these to load last?

    Read the article

  • Update Your NetBeans Plugin's "Supported NetBeans Versions" In The Next Two Weeks!

    - by Geertjan
    For each NetBeans plugin uploaded to the NetBeans Plugin Portal, the registration page starts like this: Note how the "Supported NetBeans Versions" field is empty, i.e., no checkbox is checked, for the plugin above. As you can also see, there is a red asterisk next to this field, which means it is mandatory. It is mandatory in the latest version of the NetBeans Plugin Portal, while it wasn't mandatory before, which is why several plugins were registered without their supported version being set. Therefore, since the version is now mandatory, anyone who doesn't want their plugin to be hidden for the rest of this year, and removed on 1 January 2013 if no one complains about its absence, needs to go to their plugin's registration page and set a NetBeans Version. E-mails have been sent to the developers of unversioned plugins already, over the last few weeks. Currently there are 91 plugins that still need to have their NetBeans Version set. Probably at least 1/3 of those are my own plugins, so this is as much a reminder to myself as to anyone else! Whether or not you have received an e-mail asking you to set a NetBeans Version for your plugins, please take a quick look anyway - maybe this is a good opportunity to update other information relating to your plugin. You (and I) have two weeks: on Monday 16 April, any NetBeans plugin in the Plugin Portal without a NetBeans Version will be hidden. And then removed, at the start of next year, if no one complains.

    Read the article

  • How do I re-enable the backlight?

    - by Scott Severance
    Since Oneiric, if I leave my machine (HP Mini 110 netbook) unattended and it goes into power-save mode, the backlight gets disabled. How can I turn it back on? Note that the keyboard backlight controls (Fn+F4 and Fn+F3) don't have any effect in this situation. I've already filed a bug, but filing a bug doesn't fix my problem. I tried this workaround posted in this bug report dealing with Acer laptops: sudo setpci -s 00:02.0 F4.B=0 However, if anything, that command makes things worse. In the general case, I can see a little bit if I'm in a dark room with a flashlight aimed just so. But after running setpci I can't see anything. And I find the setpci documentation to be utterly incomprehensible, so I don't know whether I need to tweak my command somehow or whether I'm completely barking up the wrong tree. Update: I've found a workaround: I'm now booting with the kernel parameter acpi=off. This disables power management, which prevents the machine from going into power-saving mode and thus failing to come back up correctly. Of course, not having power management means that I can't use suspend or do anything to manage power other than powering the machine off (even then, I have to manually use the power switch). Also, it prevents me from using Unity 3D or GNOME Shell, forcing me into Unity 2D or GNOME Classic. So, I'd really like to be able to stop using this hack.

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    The title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation. (Stopping doesn't roll back the items already processed.) The thing is, it's not that each item takes a long time to get processed, it's that there are usually a lot of items. And what I've read so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details. So having said all this, what I have in mind is the following:
    - A user request with a list of items enters the application layer.
    - The application layer gets some information from the domain needed to process the items.
    - The application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?).
    - This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it.
    - This worker thread will process each item in its own UoW. This means the domain information from earlier needs to be stored in some DTO.
    - Since nothing is really persisted, the service should be a singleton and thread safe.
    - Whenever a user requests a progress report or wants to pause/stop the operation, the application layer will ask the service.
    Would this be a correct solution? Or am I at least on the right track with this? Especially the singleton and thread-safe part makes the whole thing feel icky.
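
    The worker part can be sketched independently of the layering question; a minimal Python version where pause/stop are events the application layer can flip through the service, progress goes through a plain callback, and process_item stands in for the per-item unit of work:

        import threading, time

        class BatchWorker:
            """Processes items on a background thread with progress, pause and stop."""
            def __init__(self, items, process_item, on_progress):
                self.items = items
                self.process_item = process_item
                self.on_progress = on_progress
                self._pause = threading.Event()
                self._stop = threading.Event()
                self._thread = threading.Thread(target=self._run, daemon=True)

            def start(self):  self._thread.start()
            def pause(self):  self._pause.set()
            def resume(self): self._pause.clear()
            def stop(self):   self._stop.set()     # items already processed stay processed

            def _run(self):
                total = len(self.items)
                for index, item in enumerate(self.items, start=1):
                    while self._pause.is_set() and not self._stop.is_set():
                        time.sleep(0.05)           # parked until resumed or stopped
                    if self._stop.is_set():
                        break
                    self.process_item(item)        # one item, one unit of work
                    self.on_progress(index, total) # "item x of y processed"

        worker = BatchWorker(
            items=list(range(10)),
            process_item=lambda i: time.sleep(0.1),
            on_progress=lambda done, total: print(f"{done}/{total} processed"),
        )
        worker.start()
        time.sleep(0.35); worker.pause(); print("paused")
        time.sleep(0.3);  worker.resume()
        worker._thread.join()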

    Read the article

  • VLC package dependencies cannot be resolved

    - by flop
    When I try to install VLC media player this pops up:
    The following packages have unmet dependencies:
     vlc: Depends: vlc-nox (= 2.0.3-0ubuntu0.12.04.1) but 2.0.3-0ubuntu0.12.04.1 is to be installed
          Depends: libavcodec-extra-53 (>= 4:0.8-1~) but 4:0.8.3ubuntu0.12.04.1 is to be installed
          Depends: libavutil-extra-51 (>= 4:0.8-1~) but 4:0.8.3ubuntu0.12.04.1 is to be installed
          Depends: libc6 (>= 2.15) but 2.15-0ubuntu10 is to be installed
          Depends: libfreetype6 (>= 2.2.1) but 2.4.8-1ubuntu2 is to be installed
          Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.3-1ubuntu5 is to be installed
          Depends: libqtcore4 (>= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is to be installed
          Depends: libqtgui4 (>= 4:4.7.0~beta1) but 4:4.8.1-0ubuntu4.2 is to be installed
          Depends: libstdc++6 (>= 4.6) but 4.6.3-1ubuntu5 is to be installed
          Depends: libva-x11-1 (> 1.0.15~) but it is not going to be installed
          Depends: libva1 (> 1.0.15~) but it is not going to be installed
          Depends: libxcb-composite0 but it is not going to be installed
          Depends: libxcb-randr0 (>= 1.1) but it is not going to be installed
          Depends: libxcb-xv0 (>= 1.2) but it is not going to be installed
          Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu4 is to be installed
    Can someone please tell me what this means? How can I get around it? (if possible) P.S. I am also receiving similar pop ups when trying to install most programs. I am using Ubuntu 12.04 64-bit. Any help will be much appreciated.

    Read the article

  • Managing the Domino Effect (with Tutor Publisher Reports)

    - by [email protected]
    When an organization upgrades their business application or improves a process, it triggers changes that will reverberate throughout an organization, like a falling row of dominoes standing on end. A tangible and repeatable way to communicate change is with updated process documentation. But how do organizations get their arms around all the documents that are impacted by an application upgrade or process improvement? A small change in one place will trigger subsequent changes in other areas. A simple domino chain of questions can go like this. What screens have changed? Do the new screens change the process in place? In what procedural documents are the screens referenced? Who uses the screens and must be notified of the changes? What other documents are affected? Will the change affect current company policy? Tutor Publisher compiles focused, easy to read impact analysis reports of your process documentation library that answer these tough questions. Tutor reports make it easy to quickly target the information and documents that require updating. In turn, the updated documents are used to communicate the change. The Tutor writing methodology and Publisher reports provide organizations the means to confidently keep documentation in sync with the way the business runs. Start managing the domino effect in your organization. Get a grip on it here!

    Read the article

  • Why isn't there a typeclass for functions?

    - by Steve314
    I already tried this on Reddit, but there's no sign of a response - maybe it's the wrong place, maybe I'm too impatient. Anyway... In a learning problem I've been messing around with, I realised I needed a typeclass for functions, with operations for applying, composing etc. Reasons:
    - It can be convenient to treat a representation of a function as if it were the function itself, so that applying the function implicitly uses an interpreter, and composing functions derives a new description.
    - Once you have a typeclass for functions, you can have derived typeclasses for special kinds of functions - in my case, I want invertible functions.
    For example, functions that apply integer offsets could be represented by an ADT containing an integer. Applying those functions just means adding the integer. Composition is implemented by adding the wrapped integers. The inverse function has the integer negated. The identity function wraps zero. The constant function cannot be provided because there's no suitable representation for it. Of course it doesn't need to spell things as if the values were genuine Haskell functions, but once I had the idea, I thought a library like that must already exist, maybe even using the standard spellings. But I can't find such a typeclass in the Haskell library. I found the Data.Function module, but there's no typeclass - just some common functions that are also available from the Prelude. So - why isn't there a typeclass for functions? Is it "just because there isn't" or "because it's not so useful as you think"? Or maybe there's a fundamental problem with the idea? The biggest possible problem I've thought of so far is that function application on actual functions would probably have to be special-cased by the compiler to avoid a looping problem - in order to apply this function I need to apply the function application function, and to do that I need to call the function application function, and to do that...
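
    The integer-offset example can be written down concretely; here is a Python sketch that only mimics the interface (Haskell would express it with an actual typeclass and instances), showing apply interpreting the data, compose adding the wrapped integers, and inverse negating:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Offset:
            """Represents the function x -> x + n as data, not as a closure."""
            n: int

            def apply(self, x):           # applying the function = interpreting the data
                return x + self.n

            def compose(self, other):     # composition derives a new description
                return Offset(self.n + other.n)

            def inverse(self):            # invertible: the inverse just negates
                return Offset(-self.n)

        identity = Offset(0)              # identity wraps zero; no constant function exists here

        f = Offset(3).compose(Offset(4))  # x -> x + 7, represented as Offset(7)
        assert f.apply(10) == 17
        assert f.compose(f.inverse()) == identity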

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean that the user has to continually poll, or that eventual means an update has to take more than 500ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or if that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client, which consumes WebAPI RESTful services, which send commands to NServiceBus command handlers, which save to NEventStore, which dispatches events to NServiceBus event handlers, which send messages to a SignalR hub, which sends notifications back to the AngularJS web client. So with that setup, theoretically someone initiates a request, the server validates the request and sends out the necessary commands. In the meantime the client gets a 200 response and updates the view: working on it. Some time later it gets a message: done, here's the updated data. Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc... Which event handler needs to notify the client? Should all of them, in a progress-bar-style update?
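
    One common answer is that no individual event handler talks to the client; instead a small process manager (or a dedicated handler subscribed to all of the relevant events) collects the events belonging to one command and pushes a single "done" notification once the expected set has arrived. A hedged, in-memory Python sketch of that idea; in the real setup this would live in something like an NServiceBus saga and push through the SignalR hub:

        EXPECTED = {"CustomerIDCreated", "CustomerNameUpdated", "CustomerAddressUpdated"}

        class CompletionTracker:
            """Collects events per command id and notifies the client exactly once."""
            def __init__(self, notify):
                self.notify = notify          # e.g. a SignalR hub proxy in the real setup
                self.seen = {}                # command_id -> set of event types received

            def handle(self, command_id, event_type, payload):
                received = self.seen.setdefault(command_id, set())
                received.add(event_type)
                if received >= EXPECTED:      # all expected events have arrived
                    self.notify(command_id, payload)
                    del self.seen[command_id]

        tracker = CompletionTracker(notify=lambda cid, data: print(f"command {cid} done: {data}"))
        tracker.handle(42, "CustomerIDCreated",      {"id": 42})
        tracker.handle(42, "CustomerNameUpdated",    {"name": "Ada"})
        tracker.handle(42, "CustomerAddressUpdated", {"city": "Oslo"})   # triggers the single push

    The progress-bar variant is the same tracker notifying after every event instead of only at the end.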

    Read the article

  • Disable pasting in a textbox using jQuery

    - by Michel Grootjans
    I had fun writing this one. My current client asked me to allow users to paste text into textboxes/textareas, but the pasted text should be cleaned of '<...>' tags. Here's what we came up with:
        $(":input").bind('paste', function(e) {
            var el = $(this);
            setTimeout(function() {
                var text = $(el).val();
                $(el).val(text.replace(/<(.*?)>/gi, ''));
            }, 100);
        });
    This is so simple, I'm amazed. The first part just binds a function to the paste operation applied to any input declared on the page: $(":input").bind('paste', function(e) {...}); In the first line of the handler, I just capture the element. Then I wait for 100ms: setTimeout(function() {....}, 100); then get the actual value from the textbox and replace it using a regular expression that basically means "replace everything that looks like '<{0}>' with ''". gi at the end are regex flags in JavaScript. /<(.*?)>/gi

    Read the article

  • Steps to send patch to Launchpad

    - by Alois Mahdal
    With a Git/Github background and knowing very little about Bazaar VCS, I would like to occasionally report a bug to Launchpad and even send a patch. I'd like to do it in a "proper" way so that it's ready for merging or improvement while not getting in the way. I can't seem to find a decent, simple how-to suited to my needs. So, what I did so far: I have created a Launchpad account, reported the bug, installed Bazaar and set up SSH keys, etc. Now if it was Github, I'd fork the repo, clone the forked repo, create a sanely named branch and do the work, commit + push, and create a pull request using the Github WUI. But it's not Github, and both the LP and Bazaar architectures seem quite different from their Github/Git counterparts. So could a kind soul save me from drowning in tons of documents and compile a straightforward step path, mainly the second part? Possibly including relevant CLI commands where they are needed? Edit: It seems that I should clarify whether I'm asking specifically about Ubuntu packages (whatever that means) or Launchpad packages. I don't really care much about the distinction between Ubuntu packages and non-Ubuntu packages. Any software could be in Ubuntu today and out of it tomorrow, or vice-versa. The development is what matters much more than the distribution. So, assuming that not every single package distributed in Ubuntu is hosted on Launchpad, and that an "official" or "default" workflow for Launchpad exists (well, if all devs can agree on using Bazaar, why couldn't most of them agree on a patching workflow?), I'm asking about the Launchpad way, not the Ubuntu way. And I chose AU because the intersection is vast, so I guess it's pretty on topic here.

    Read the article

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond. Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality. Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers. Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free. Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome. Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors. If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

    Read the article

  • What You Said: Where Do You Find Your Next Game?

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite places and tricks for finding new video games to play. It turns out the least of your problems was finding new games! From the comments it became apparent How-To Geek readers had absolutely no problem finding new games to add to their gaming stable. Buzz writes: I have quite an elaborate procedure in finding my next game:For free games i simply follow the feeds on a few websites like Freegamer, LinuxGames, HappyPenguin and Penguspy. Every now and them i browse Wikipedia articles on free/FOSS games. For commercial games the procedure depends on what i enjoyed the most in that game:- If i enjoyed the story or the general feel: i usually start with a game i like and look for sequels, prequels, mods or spinoffs. I even go out on a limb and give other platforms (than a PC) a try, even if it usually means emulation. If you really enjoy a game series/saga it’s usually worth the effort.- If i enjoy the producer/gaming company then i seek out more of their games.- If i enjoy the technical achievements that went into making the game or if i am concerned for the system requirements of my gear i try to play games that are built on the same engine(s) as one of the games i ran smooth and enjoyed.- If i feel like playing a particular genre i usually start with a title i enjoyed and look for alternatives or similar games- You can always try searching for Game of The Year winners for a particular time period or other similar accomplishments. They usually yield great results. How to Make Your Laptop Choose a Wired Connection Instead of Wireless HTG Explains: What Is Two-Factor Authentication and Should I Be Using It? HTG Explains: What Is Windows RT and What Does It Mean To Me?

    Read the article

  • nomachine NX: Text missing on all GTK interfaces (Unity and Gnome Classic)

    - by hansioux
    [Edit] I later realized my issue only occurs when I am using NX to remotely access my machine. Therefore I edited the title and description. I have also found a temporary solution, which is to "disable render extension" in the custom display settings. But doing so makes the NX experience very slow and laggy, and not that nice to look at. [/Edit] I did a fresh install on a new computer, and was trying to set up my fonts. When I log in remotely via NX, the text is missing on all GTK-based interfaces. That means most menus (except for Unity), right-click menus, applications themselves, the terminal, and so on. About the only thing unaffected is Firefox: all the text shows up just fine there. So that probably already says something about text permissions. I went to check whether my fonts have the correct permissions, and they do. I removed my custom settings from /etc/fonts/config.d, and still the text is missing. There is a workaround, using "disable render extension" in the custom display settings. How do I fix this issue permanently?

    Read the article

  • How to update entity states and animations in a component-based game

    - by mivic
    I'm trying to design a component-based entity system for learning purposes (and later use in some games) and I'm having some trouble when it comes to updating entity states. I don't want to have an update() method inside the Component, to prevent dependencies between Components. What I currently have in mind is that components hold data and systems update components. So, if I have a simple 2D game with some entities (e.g. player, enemy1, enemy2) that have Transform, Movement, State, Animation and Rendering components, I think I should have:
    - A MovementSystem that moves all the Movement components and updates the State components.
    - A RenderSystem that updates the Animation components (the Animation component should have one animation, i.e. a set of frames/textures, for each state, and updating it means selecting the animation corresponding to the current state (e.g. jumping, moving_left, etc.) and updating the frame index). Then the RenderSystem updates the Render components with the texture corresponding to the current frame of each entity's Animation and renders everything on screen.
    I've seen some implementations like the Artemis framework, but I don't know how to solve this situation. Let's say that my game has the following entities, where each entity has a set of states and one animation for each state:
    - player: "idle", "moving_right", "jumping"
    - enemy1: "moving_up", "moving_down"
    - enemy2: "moving_left", "moving_right"
    What are the most accepted approaches in order to update the current state of each entity? The only thing I can think of is having separate systems for each group of entities and separate State and Animation components, so I would have PlayerState, PlayerAnimation, Enemy1State, Enemy1Animation... PlayerMovementSystem, PlayerRenderingSystem... but I think this is a bad solution and breaks the purpose of having a component-based system. As you can see, I'm quite lost here, so I'd very much appreciate any help.
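
    A small Python sketch of the shared-component alternative, where State and Animation are plain data attached to any entity and one AnimationSystem picks the clip for whatever the current state is, so no PlayerAnimation/Enemy1Animation split is needed (entity layout and frame names are made up):

        class State:
            def __init__(self, current):
                self.current = current                 # e.g. "idle", "moving_left"

        class Animation:
            def __init__(self, clips):
                self.clips = clips                     # state name -> list of frames
                self.frame = 0

        class AnimationSystem:
            """One system updates every entity's animation from its State component."""
            def update(self, entities):
                for e in entities:
                    state, anim = e["state"], e["animation"]
                    frames = anim.clips[state.current]         # clip for the current state
                    anim.frame = (anim.frame + 1) % len(frames)
                    e["render_frame"] = frames[anim.frame]     # what a RenderSystem would draw

        player = {"state": State("idle"),
                  "animation": Animation({"idle": ["p_idle_0", "p_idle_1"],
                                          "jumping": ["p_jump_0"]})}
        enemy1 = {"state": State("moving_up"),
                  "animation": Animation({"moving_up": ["e1_up_0", "e1_up_1"],
                                          "moving_down": ["e1_down_0"]})}

        system = AnimationSystem()
        player["state"].current = "jumping"            # e.g. the MovementSystem changed the state
        system.update([player, enemy1])
        print(player["render_frame"], enemy1["render_frame"])

    The point is that the system only ever reads the component data, so adding enemy2 with its own clip table requires no new classes or systems.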

    Read the article

  • Best solution for getting referral information in PHP

    - by absentx
    I am currently redoing some link structuring on a website. In the past we have used specific PHP files on the last step to direct the user to the proper place. Example:
        www.mysite.com/action/go-to-blue.php
    or
        www.mysite.com/action/short/go-to-red.php
        www.mysite.com/action/tall/go-to-red.php
    We are now restructuring to eliminate the /short/ or /tall/ directory. What this means is that now "go-to-blue.php" will be doing some extra processing to make sure it sends the visitor to the proper place. The static method of the past was quite effective, because, well, if they left from that page we knew we had it right. Now since we are 301 redirecting action/short/go-to-red.php to just action/go-to-red.php, it is quite important on "go-to-red.php" that we realize a user may have been redirected from /short/ or /tall/. So right now I am using HTTP_REFERRER, and of course in my testing that works fine, but after a lot of reading it is clear that this is not a solid solution, so I started to brainstorm other ways to check and make sure we get the proper referral information. If we could check HTTP_REFERRER plus some other test, I would feel confident we have a pretty good system in place to send the visitor to the right place. Some questions/comments: Could I use a session variable or a cookie to accomplish this goal? If so, would that be maintained through the 301 redirect? I don't see why it wouldn't be. Passing the URL in the URL is not an option in this case.
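
    Since the Referer header is unreliable, one sturdier option is to have the old /short/ and /tall/ URLs tag the 301 themselves, either with a query parameter or a cookie that the consolidated script then reads. A hedged sketch of the query-parameter variant (in Python/Flask rather than PHP, with made-up routes, purely to show the shape of the idea):

        from flask import Flask, redirect, request

        app = Flask(__name__)

        # Old URLs 301 to the consolidated endpoint, carrying their origin in the query string.
        @app.route("/action/<size>/go-to-red")
        def legacy_red(size):
            return redirect(f"/action/go-to-red?from={size}", code=301)

        @app.route("/action/go-to-red")
        def go_to_red():
            origin = request.args.get("from", "direct")   # survives the redirect, unlike Referer
            destinations = {"short": "https://example.com/red-short",
                            "tall": "https://example.com/red-tall",
                            "direct": "https://example.com/red"}
            return redirect(destinations.get(origin, destinations["direct"]))

        # A cookie set on the redirect response (with path="/") would equally be sent along
        # with the follow-up request, so a session-based variant works too.

        if __name__ == "__main__":
            app.run()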

    Read the article

  • Ad-hoc String Manipulation With Visual Studio

    - by Liam McLennan
    Visual studio supports relatively advanced string manipulation via the ‘Quick Replace’ dialog. Today I had a requirement to modify some html, replacing line breaks with unordered list items. For example, I need to convert:
        Infrastructure<br/>
        Energy<br/>
        Industrial development<br/>
        Urban growth<br/>
        Water<br/>
        Food security<br/>
    to:
        <li>Infrastructure</li>
        <li>Energy</li>
        <li>Industrial development</li>
        <li>Urban growth</li>
        <li>Water</li>
        <li>Food security</li>
    This cannot be done with a simple search-and-replace but it can be done using the Quick Replace regular expression support. To use regular expressions expand ‘Find Options’, check ‘Use:’ and select ‘Regular Expressions’. Typically, Visual Studio regular expressions use a different syntax to every other regular expression engine. We need to use a capturing group to grab the text of each line so that it can be included in the replacement. The syntax for a capturing group is to replace the part of the expression to be captured with { and }. So my regular expression: {.*}\<br/\> means capture all the characters before <br/>. Note that < and > have to be escaped with \. In the replacement expression we can use \1 to insert the previously captured text. If the search expression had a second capturing group then its text would be available in \2 and so on. Visual Studio’s quick replace feature can be scoped to a selection, the current document, all open documents or every document in the current solution.
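
    The same transformation outside Visual Studio, in the more common regex syntax where capturing groups use parentheses instead of braces; a quick Python equivalent:

        import re

        html = """Infrastructure<br/>
        Energy<br/>
        Industrial development<br/>
        Urban growth<br/>
        Water<br/>
        Food security<br/>"""

        # (.*) captures the text before each <br/>; \1 re-inserts it between <li> tags.
        print(re.sub(r"(.*)<br/>", r"<li>\1</li>", html))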

    Read the article
