Search Results

Search found 9744 results on 390 pages for 'k means'.


  • Why do I have a gnomekeyring.IOError when doing "quickly share"?

    - by Agmenor
    When I want to push my app to Launchpad by doing quickly share --verbose, I get the following GNOME Keyring error:

        Get Launchpad Settings
        Traceback (most recent call last):
          File "/usr/share/quickly/templates/ubuntu-application/share.py", line 101, in <module>
            launchpad = launchpadaccess.initialize_lpi()
          File "/usr/lib/python2.7/dist-packages/quickly/launchpadaccess.py", line 91, in initialize_lpi
            allow_access_levels=["WRITE_PRIVATE"])
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 539, in login_with
            credential_save_failed, version)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 342, in _authorize_token_and_login
            authorization_engine.unique_consumer_id)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 282, in load
            return self.do_load(unique_key)
          File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 336, in do_load
            'launchpadlib', unique_key)
          File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 34, in get_password
            return _keyring_backend.get_password(service_name, username)
          File "/usr/lib/python2.7/dist-packages/keyring/backend.py", line 154, in get_password
            items = gnomekeyring.find_network_password_sync(username, service)
        gnomekeyring.IOError
        ERROR: share command failed
        Aborting

    This used to work, so I already have SSH and GPG configured. This is probably part of the explanation: I get this error when I am connected to the machine through an SSH tunnel with X forwarding, but not when I have physical access to the computer. Could you please give me some indication of what to do?
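    One workaround that is sometimes suggested for keyring-over-SSH sessions, sketched here as an assumption rather than a verified fix: start a keyring daemon inside the SSH session so that python-keyring has something to talk to.

        # Start gnome-keyring-daemon in this SSH session and export the
        # environment variables it prints, then retry the share.
        eval $(gnome-keyring-daemon --start)
        export GNOME_KEYRING_CONTROL GNOME_KEYRING_PID
        quickly share --verbose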

    Read the article

  • Ubuntu security with services running from /opt

    - by thejartender
    It took me a while to understand what's going on here (I think), but can someone explain to me whether there are security risks in my reasoning, as I am trying to set up a home web server as a developer with some good Linux knowledge? Ubuntu is not like other systems, in that it has a restricted root account: you cannot log in as root or su to root. This was a problem for me, as I have had to install numerous applications and services to /opt as per their documentation (XAMPP for Linux is a good example). The problem here is that this directory is owned by root:root. I notice that my admin user account does not belong to the root group, per the following command:

        groups username

    So my understanding is that even though the files and services that I place in /opt belong to root, executing them by means of sudo (as required) does not mean that they are run as root? I imagine that the sudo command itself belongs to the root user and has something like a 775 permission? So the question I have is whether running a service like Tomcat, Apache, etc. exposes my system as it would on other systems? Obviously I need to secure these in their configurations, but isn't the golden rule to never run something as root? And what happens if I have multiple services running under the same user/group on a compromised server?
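    For illustration, a minimal sketch of the usual mitigation: running an /opt service under a dedicated unprivileged account instead of root (the user name and paths here are examples, not anything XAMPP or Tomcat mandates).

        # Create a system account with no login shell for the service
        sudo useradd -r -s /usr/sbin/nologin tomcat
        # Hand the service's files in /opt over to that account
        sudo chown -R tomcat:tomcat /opt/tomcat
        # Start the service as that user rather than as root
        sudo -u tomcat /opt/tomcat/bin/startup.sh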

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host, which means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1: This works pretty well, but it does force double authentication if you visit http://site.com, once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]
        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2: If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned: "Using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no such luck finding a force-SSL config option for the Redirect directive, so I have been unable to test this theory.
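    One variation on Config 2 that is sometimes suggested, sketched here on the assumption that the shared host allows PHP (the script name force-ssl.php is hypothetical), is to point ErrorDocument 403 at a tiny script that rebuilds the https URL from the original request, so the path survives the redirect:

        SSLRequireSSL
        ErrorDocument 403 /force-ssl.php

    And force-ssl.php:

        <?php
        // Hypothetical helper: when Apache serves this as the 403 page for a
        // plain-http request, REQUEST_URI should still hold the originally
        // requested path, so redirect to its https counterpart.
        header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
        exit;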

    Read the article

  • Upgrading Visual Studio 2010

    - by Martin Hinshelwood
    I have been running Visual Studio 2010 as my main development studio on my development computer since the RC was released. I need to upgrade that to the RTM, but first I need to remove it. Microsoft have done a lot of work to make this easy, and it works. It's as easy as uninstalling from the control panel. I have had many previous versions of Visual Studio 2010 on this same computer with no need to rebuild to remove all the bits.

    Figure: Run the uninstall from the control panel to remove Visual Studio 2010 RC
    Figure: The uninstall removes everything for you.
    Figure: A green tick means that everything went OK. If you get a red cross, try installing the RTM anyway; it should warn you about what was not uninstalled properly and you can remove it manually.

    Once you have VS2010 RC uninstalled, installing should be a breeze. The install for 2010 is much faster than for 2008, which could take all day, and then some, on slower computers. This takes around 20 minutes even on my small laptop. I always do a full install: although I have to do C#, I sometimes get to use a proper programming language, VB.NET. Seriously, there is nothing worse than trying to open a project when the other developer has used something you don't have. It's not their fault. It's yours! Save yourself the angst and install fully; it's only 5.9 GB.

    Figure: I always select all of the options.

    Now go forth and develop! Preferably in VB.NET...

    Technorati Tags: Visual Studio, VS2010, VS 2010

    Read the article

  • Taking our Friendships to the next level.

    - by RedAndTheCommunity
    Red Gate have been running the Friends of Red Gate program for years now, and over that time we've built some great relationships with some truly awesome members of the SQL and .NET communities. When I took over the running of the program from Annabel in 2011, I was overwhelmed by the enthusiasm and commitment of our Friends. There were just so many of them, however, that it was hard to make the most of the relationships we had with people, and I wanted to fix that. I decided to survey all our Friends, to find out what they wanted to get out of, and put into, being in the Friends of Red Gate (FoRG) program. From the results of that survey, I identified 30 FoRGs that were really willing and able to go that step further to help Red Gate improve their tools, improve their relationship with the community, and improve the Friends of Red Gate program. Those 30 Friends of Red Gate have been awarded 'FoRG+' status. That means they'll:

    - Have a closer relationship with the product teams, by getting involved in projects
    - Have even more access to the inside track about the tools they're interested in
    - Get the opportunity to come visit us at the Red Gate office and really influence the development of the tools

    Plus more, depending on how the individual FoRG+ wants to work with us. This doesn't mean I've forgotten our other Friends; I'm working on ways to improve their experience of the Friends of Red Gate program, and I'll write about them in another post. If you're an existing Friend of Red Gate, and you're interested in finding out how to get involved in the FoRG+ program, then I'd love to chat to you. For anyone that's interested in joining the Friends of Red Gate program, take a look at the web page dedicated to the program, and get in touch at [email protected] to be put on the waiting list for our 2013 program.

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces (36 vertices). Sound correct so far?

    My question is: how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices via the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, but then I would lose the hard edges a cube needs, and this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
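    For what it's worth, a minimal sketch of the usual approach, written in WebGL-style JavaScript to match the gl.ELEMENT_ARRAY_BUFFER above (a gl context is assumed to be in scope): duplicate each corner once per face, 24 vertices for a cube, so each copy carries its own face normal, while the index buffer still shares vertices between the two triangles of a face.

        // Build a cube with 4 vertices per face (24 total) so every face
        // keeps its own flat normal; indices still share vertices between
        // the two triangles of each face.
        const positions = [];
        const normals = [];
        const indices = [];

        function addFace(corners, normal) {
          const base = positions.length / 3;
          for (const c of corners) {
            positions.push(...c);
            normals.push(...normal); // all four corners carry the face normal
          }
          indices.push(base, base + 1, base + 2, base, base + 2, base + 3);
        }

        // Front (+z) and back (-z) faces; the other four follow the same pattern.
        addFace([[-1,-1, 1],[ 1,-1, 1],[ 1, 1, 1],[-1, 1, 1]], [0, 0, 1]);
        addFace([[ 1,-1,-1],[-1,-1,-1],[-1, 1,-1],[ 1, 1,-1]], [0, 0,-1]);
        // ...remaining four faces...

        const indexBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
        gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
        // positions/normals go into gl.ARRAY_BUFFER objects as usual, then:
        // gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);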

    Read the article

  • What are the risks of installing a "bad quality" package?

    - by ændrük
    When I try to install sonic-visualiser_1.9cc-1_amd64.deb via the Software Center, the following warning message is displayed:

        The package is of bad quality
        The installation of a package which violates the quality standards isn't allowed. This could cause serious problems on your computer. Please contact the person or organisation who provided this package file and include the details beneath.

        Lintian check results for /home/ak/Downloads/sonic-visualiser_1.9cc-1_amd64.deb:
        Use of uninitialized value $ENV{"HOME"} in concatenation (.) or string at /usr/bin/lintian line 108.
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/sonic-visualiser 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/sonic-visualiser.desktop 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/CHANGELOG 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/COPYING 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/README 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser-layer.desktop 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser.desktop 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/ 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon-light.svg 1000/1000
        E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon.svg 1000/1000

    I understand that this means the package doesn't meet Debian policy, and I know how to override the warning and install the package anyway. What are the risks of doing so?
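    As an aside, since every error above is wrong-file-owner-uid-or-gid (files owned by uid/gid 1000 instead of root), one way such a package could be repacked before installing, sketched here with standard dpkg-deb options:

        # Unpack the .deb, fix the ownership, and rebuild it.
        dpkg-deb -R sonic-visualiser_1.9cc-1_amd64.deb pkg
        sudo chown -R root:root pkg
        sudo dpkg-deb -b pkg sonic-visualiser_fixed.deb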

    Read the article

  • Push or Pull Mobile Coupons?

    - by David Dorf
    Mobile phones allow consumers to receive coupons in context, which increases their relevance and therefore redemption rates. Using your current location, you can get coupons that can be redeemed nearby for the things you want now. Receiving a coupon for something you wanted last week, or something you might buy next month, just isn't as valuable. I previously talked about Placecast and their concept of pushing offers to mobile phones that transgress "geo-fences" around points of interest, like store locations. This push model is an automatic reminder that there are good deals just up ahead. It works well in dense cities where people walk, but I question how effective it will be in the suburbs where people are driving. McDonald's recently ran a campaign in Finland where they pushed offers to GPS devices when cars neared their restaurants. Amazingly, they achieved a 7% click-through rate. But 8coupons.com sees things differently. They prefer the pull model, which requires customers to initiate a search for nearby coupons, and they've done some studies to better understand what "nearby" means. It turns out that there are concentric search circles that emanate from your home and work. From inner to outer, people search for food, drink, shopping, and entertainment. Intuitively, that feels about right. So the question is, do consumers prefer the push or the pull model for offers? No doubt the market is big enough for both. These days it's not good enough to just know who your customers are -- you also need to know where they are so you can catch them in the right moment. According to Borrell Associates, redemption rates of mobile coupons are 10x those of traditional mail and newspaper coupons. One thing is for sure: assuming 85% of consumers regularly spend money within 5 miles of home and work, location-based coupons make tons of sense.

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a "Gallery", which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format, with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the "SYSPREP" process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system, it asks for a few details, such as what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, whether for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx

    But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.
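    For reference, the generalize step mentioned above boils down to one command run inside the VM (standard Windows tooling; this is a sketch of the step, not a full upload walkthrough):

        REM Strip the machine identity so the image can be redeployed;
        REM on restart, Windows asks for a new name, password, and so on.
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown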

    Read the article

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns and some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA: each service should have its own database. But then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you segregate a database per, say, an SOA service, a DDD bounded context, an application, etc.?
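    To make the messaging idea concrete, here is a minimal sketch (all names are illustrative; this is not any particular framework's API) of one service publishing a customer-changed event that another service consumes to refresh its own denormalised copy, instead of joining into the owning service's database:

        // Illustrative event contract shared between services
        interface CustomerAddressChanged {
          customerId: string;
          newAddress: string;
          occurredAt: string; // ISO timestamp, useful for ordering/idempotency
        }

        // Publisher side: the service that owns customer data emits an event
        // after committing to its own database (bus stands in for any broker).
        async function onAddressUpdated(
          bus: { publish(topic: string, msg: unknown): Promise<void> },
          evt: CustomerAddressChanged
        ) {
          await bus.publish("customer.address-changed", evt);
        }

        // Subscriber side: another service updates its local, denormalised
        // copy, so it never needs to query the owning service's database.
        async function handleAddressChanged(
          localDb: Map<string, string>,
          evt: CustomerAddressChanged
        ) {
          localDb.set(evt.customerId, evt.newAddress);
        }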

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The usage of desktop and laptop computers has increased phenomenally all around the world. This link gives you a picture of the power consumption of various devices we use daily. To reduce power consumption, Hibernate is one of the best ways, provided by default in Windows Vista and Windows 7. The Hibernate feature enables you to close the machine without closing your applications; the applications will be restored as they were once you restart the machine. Hibernate is not enabled in Windows XP by default. I've seen many people run (not switch off) their machines for months and months because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:

    1. Right-click on the Desktop.
    2. Click on Properties.
    3. Go to the Screen Saver tab.
    4. Click on the Power button.
    5. Select the Hibernate tab.
    6. Check the checkbox "Enable Hibernate".
    7. Apply the settings.

    Now when you try to shut down, the "Shut down Windows" dialog shows the "Hibernate" option, and you can safely close the machine without closing your applications or windows, as they will be restored once you turn the machine on.

    Sometimes you might want to provide this feature programmatically in the applications you develop for Windows, typically where a process needs a long time to finish; download managers are one of the best examples. Below is the code to Hibernate from .NET:

        using System.Windows.Forms;

        namespace CodeKicks.WinApp.Machine
        {
            public static class MyMachineHelper
            {
                public static void DoHibernate()
                {
                    // Use PowerState.Suspend instead to sleep rather than hibernate.
                    //Application.SetSuspendState(PowerState.Suspend, true, false);
                    Application.SetSuspendState(PowerState.Hibernate, true, false);
                }
            }
        }

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast: they know the app they use so well, and there is such a mechanical quality to their work, that they can "type ahead", i.e. continue typing, tabbing, and entering faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when this next entry form appears, their buffered keystrokes fill the text boxes and they continue typing, selecting, etc. In contexts like this, this speed is desirable, since these people are really productive. I think this "typing ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or whether it is impossible because of the way web apps work, handle sessions, etc. (network latency and the overhead of generating new web pages)?

    Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not Google-style suggest-as-you-type. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time, it will handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input is known as a keyboard buffer.
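    As an illustration of what a web app would have to do by hand, here is a sketch (browser TypeScript; the element handling is hypothetical) that queues keystrokes arriving while the next form is still loading and replays them once it is ready:

        // Buffer keystrokes that arrive while the next form is loading,
        // then replay them into the freshly rendered first field.
        const buffer: string[] = [];
        let formReady = false;

        document.addEventListener("keydown", (e) => {
          if (!formReady && e.key.length === 1) {
            buffer.push(e.key);   // capture instead of dropping the keystroke
            e.preventDefault();
          }
        });

        function onNextFormRendered(firstField: HTMLInputElement) {
          // Replay whatever the typist got ahead with.
          firstField.value += buffer.join("");
          buffer.length = 0;
          formReady = true;
          firstField.focus();
        }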

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning HTML/lat/long from my MySQL database. Currently:

    1. A user picks a business category, e.g. "Video Production".
    2. An ajax call is sent to a CodeIgniter controller.
    3. The controller queries the db and returns the following data via JSON: lat/long of the marker, and the HTML for the popup window (approximately 34 rows in the database across two tables per business).
    4. The ajax call receives this data and then plots the marker along with the HTML onto the map.

    The data that is returned from the controller is one big JSON object, and this is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (hundreds of businesses) can get very taxing on the server.

    My question is: would it be more beneficial to modify the process flow as follows?

    1. A user picks a business category, e.g. "Video Production".
    2. An ajax call is sent to a CodeIgniter controller.
    3. The controller queries the database for the location-based information only: lat/long plus a level (used to change the marker icon color). This would be a single row per business with several columns.
    4. The ajax call receives this data and plots the marker on the map.
    5. When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
    6. The controller queries the database for the HTML and additional data based on business_id.

    And if not, what are some better suggestions for this problem? In summary, this means that rather than including the HTML and additional data for each business up front, I would send only minimal location information and then re-query for the full details when each business marker is clicked.

    Potential downsides: longer load times when a user clicks a marker icon; more code; more queries to the database.
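    On the client side, the second flow could look roughly like this (browser TypeScript; the endpoint paths are placeholders, and plotMarker/showPopup stand in for the Google Maps calls):

        // Load lightweight marker data up front; fetch popup HTML only on click.
        interface MarkerInfo { businessId: number; lat: number; lng: number; level: number; }

        declare function plotMarker(m: MarkerInfo, onClick: () => void): void;
        declare function showPopup(m: MarkerInfo, html: string): void;

        async function loadCategory(category: string) {
          const res = await fetch(`/markers?category=${encodeURIComponent(category)}`);
          const markers: MarkerInfo[] = await res.json();
          for (const m of markers) {
            plotMarker(m, async () => {
              // Deferred: only hit the server for the heavy HTML when clicked.
              const detail = await fetch(`/markers/${m.businessId}/popup`);
              showPopup(m, await detail.text());
            });
          }
        }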

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point), it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given the exposure and enthusiasm around DVCS (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "How could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than simply asking that (even if this instrument is regarded as biased). If you know any other instrument, please, by all means, mention it.

    Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider that DVCS may have generated enough buzz to be considered hype by some, so it deserves to be at least evaluated/pondered as such by using this instrument in order to prove or disprove it to whatever degree. I'm an advocate of DVCS, BTW.

    I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to prove that the social proof of DVCS adoption is not necessarily cargo cult, and doing further research I stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle? In other (less convoluted) words, would you say that current expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment) or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • Strange if-else branching behavior in a fragment shader

    - by Winged
    In my fragment shader I have passed a uniform int uLightType variable, which indicates what type of light is in use right now. The problem is that the if-else branching does not work correctly: the fragment shader performs the instructions in every if statement block.

        if (uLightType == 1) { // Spotlight light type
            vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
            shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
        } else if (uLightType == 2) { // Omni-directional light type
            shadowDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec));
        }

    In the case when uLightType equals 1, unless I comment out the content of the second if block, it assigns another value to shadowDepth. Also, while uLightType equals 1, when I remove the second if block and change == to != as in the sample code below, nothing happens (which means that uLightType really equals 1).

        if (uLightType != 1) { // Spotlight light type
            vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
            shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
        }

    Also, when I manually create an int variable (which is not a uniform) like this: var lightType = 1; and replace uLightType with it in the if-else branch, everything works fine, so I guess it has something to do with the fact that uLightType is a uniform.
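    For context, fragment shaders commonly evaluate both sides of a branch, and texture fetches inside non-uniform control flow are a classic source of exactly this behaviour. A common workaround, sketched here against the snippet above, is to do both lookups unconditionally and select afterwards:

        // Sketch: sample both maps outside any branch, then pick one result.
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        float spotDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
        float omniDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec));
        shadowDepth = (uLightType == 1) ? spotDepth : omniDepth;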

    Read the article

  • Iphone/Android app – chatroom development – what framework & hosting needs?

    - by MikaelW
    I have some experience with iPhone and Android development, but I am now struggling to solve a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet, without the app having to constantly "pull" content from the server. This problem can't be solved with a normal php/mysql website; there must be some kind of application running on a server that is able to send messages from the server to the phone, rather than having the phone check for new messages every 10 seconds... So I'm looking for ways to solve the different problems here:

    What framework should I use on the two sides (phone / server)? It should be some kind of library that doesn't prevent me from writing paid apps. It should also be possible to have the same server for the iPhone and Android versions of the app.

    What server / hosting solution do I need, and with what sort of features? I have no experience with server applications that can handle and initiate multiple connections and are hosted on hardware that is always online.

    I tried to find resources online but couldn't so far: either the libraries had the wrong kind of license or language, or I just didn't understand... Sometimes there were nice tutorials, but for different needs, such as peer-to-peer chat over a local network. Same with the server and the hosting problem; I'm not sure where to start, really. I'm calling for help, and I promise I will complete this page with notes about the experience I gain :-) Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server... That being said, if I see something that good, it probably means that I have eaten the wrong kind of mushroom again... So, failing that, any pointer which might help me toward that quest would be greatly appreciated. Thanks in advance. Mikael

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get the full report, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, along with a variety of other useful information. Of course not all bugs show up as exceptions, but if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.

    Read the article

  • Which open source PHP project has the 'perfect' OOP design I can learn from?

    - by aditya menon
    I am a newbie to OOP, and I learn best by example. You could say this question is similar to "Which Scala open source projects should I study to learn best coding practices", but for PHP. I have heard tell that Symfony has the best 'architecture' (I will not pretend I know exactly what that means), as does the Doctrine ORM. Is it worth spending many months reading the source code of these projects, trying to deduce the patterns used and learning new tricks? I have seen an equal number of web pages dissing and liking Zend's codebase (will provide links if deemed necessary). Do you know of any other project that would make any veteran OOP developer shed tears of joy? Please let me add that practicality and scope of use are not a concern at all here. I just want to:

    1. Pick a project that has a codebase deemed awesome by devs way better and greater than me.
    2. Write code that achieves what the project does.
    3. Compare results and try to learn what I don't know.

    Basically, a codebase of academic interest. Any recommendations, please?

    Read the article

  • How best to implement HTML5 support for my validation library

    - by Vivin Paliath
    I have created an annotation-based validation library called regula, and there seems to be some amount of interest around the framework. The next thing I'd like to do is support HTML5 validation. Originally I figured that I would check whether the browser supports the HTML5 validation that has been specified, and either delegate to it or emulate it with the built-in regula equivalents. This is trivial for things like required, but once you start getting into date validation, it gets tricky (date widgets, localization, etc.). So I have a few options in front of me:

    1. Full HTML5 shim along with widgets (for date stuff etc.): I feel like this is overkill and essentially reinventing the wheel, since this is already covered by things like modernizr.
    2. Use HTML5 validation if available (either native, or provided by a shim), otherwise ignore it: this means that if HTML5 validation is available, natively or through a shim, I will use it; otherwise I will ignore it.

    I'm leaning towards the latter, since anyone who wants to use HTML5 validation currently will most probably need a shim anyway, as not all browsers support HTML5. Which option do you think is better?
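    For the second option, the browser's support can be probed directly; here is a minimal detection sketch along the lines of what modernizr does (none of this is regula's actual API):

        // Detect native support for an input attribute or input type before
        // deciding whether to delegate to the browser or emulate.
        function supportsInputAttribute(attr: string): boolean {
          return attr in document.createElement("input");
        }

        function supportsInputType(type: string): boolean {
          const input = document.createElement("input");
          input.setAttribute("type", type);
          // Unsupported types silently fall back to "text".
          return input.type === type;
        }

        const nativeRequired = supportsInputAttribute("required"); // delegate if true
        const nativeDate = supportsInputType("date");              // emulate if false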

    Read the article

  • FreeType2 Crash on FT_Init_FreeType

    - by JoeyDewd
    I'm currently trying to learn how to use the FreeType2 library for drawing fonts with OpenGL. However, when I start the program it immediately crashes with the following error: "(Can't correctly start the application (0xc000007b))". Commenting out the FT_Init_FreeType call removes the error and my game starts just fine. I'm wondering if it's my code or something to do with loading the dll file. My code:

        #include "SpaceGame.h"
        #include <ft2build.h>
        #include FT_FREETYPE_H

        // FreeType test
        FT_Library library;

        Game::Game(int Width, int Height)
        {
            // FreeType
            FT_Error error = FT_Init_FreeType(&library);
            if (error)
            {
                cout << "Error occurred during FT initialisation" << endl;
            }

    And my current use of the FreeType2 files: inside my bin folder (where the debug .exe is located) are freetype6.dll, libfreetype.dll.a, and libfreetype-6.dll. In Code::Blocks, I've linked to the lib and include folders of the FreeType 2.3.5.1 version, and included a compiler flag: -lfreetype. My program starts perfectly fine if I comment out the FT_Init call, which means the includes and library files should be fine. I can't find a solution to my problem and Google isn't helping me, so any help would be greatly appreciated.

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3)

    And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, "The Next 7,000 Days", introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together.

    -------

    QUESTION: It's your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

    Kristen: I'll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile, rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering, and how they can adapt their content and its delivery to meet those needs. And mobile shopping doesn't stop at the consumer / buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

    Kristen: Well, without question, connecting the digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn't a pure-play online retailer), they have to connect the dots, because shoppers, whether consumers or B2B buyers, don't think in clearly defined channels anymore. The expectation is connectedness: for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online / pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc. A few years ago, the main link between store and digital was the smartphone.
    We all remember when "apps" became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native, and how that ties in to the purpose and role of mobile within the digital channel; put more broadly, how these pieces fit together in the overall brand puzzle. The same could be said for "showrooming". Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it's emerged that the inverse is becoming a reality as well: "webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

    QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

    Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle's acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, marking the first entry into what was then called "cross-channel". The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine, and provided a means of connecting data from the call center and online channels.

    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform, which enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.

    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.

    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store; see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology's capabilities for omnichannel commerce to continue to drive innovation for our customers.

    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app, and the enablement of the core Commerce Service Center features.
    The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.

    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.

    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app, which illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.

    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products, and then transact using the ATG engine.

    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products, including Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience, and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

    ------

    Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because, if the digital fabric of our everyday lives continues to change at the same pace, the next five years (that's just under 2,000 days) will be dramatic.

    ----------

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI like many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front-end interfaces (read javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased accessibility, plug-in requirements, increased development time, etc.). Instead of just flat-out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, i.e. that Flash will not meet their requirements any better than standard HTML/CSS/JS, and will distract users from their content. What kind of first-hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • OSB and Ubuntu 10.04 - Too Many Open Files

    - by jeff.x.davies
    When installing the latest Oracle Service Bus (11gR1PS3) onto my Ubuntu 10.04 system, the Eclipse IDE was complaining about there being too many open files. The Oracle Service Bus and the Oracle Enterprise Pack for Eclipse (aka OEPE) do make use of a lot of files. By default, Ubuntu will restrict each user to 1024 open files; a much more realistic number for OSB development is 4096. Changing the file limit in Ubuntu is fairly simple (if arcane). You will need to modify two different files and then restart your machine.

    First, you need to modify the limits.conf file as the root user. Open a terminal window and enter the following command:

        sudo gedit /etc/security/limits.conf

    Add the following 2 lines to the file. The asterisk simply means that the rule will apply to all users.

        * soft nofile 4096
        * hard nofile 4096

    Save your changes and close gedit. The second file to change is the common-session file. Use the following command:

        sudo gedit /etc/pam.d/common-session

    Add the following line:

        session required pam_limits.so

    Save the file and exit gedit. Restart your machine. You shouldn't have any more problems with too many open files.
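    As a quick sanity check (a standard shell builtin, nothing OSB-specific), you can verify the new limit after logging back in:

        # Should print 4096 once the limits.conf and PAM changes have taken effect
        ulimit -n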

    Read the article

  • Brand New Annotations Support

    - by Ondrej Brejla
    Hi all! Today we would like to introduce our brand new annotation support for NetBeans 7.2. The first thing which is different is the look of annotations in code completion. As you can see, there is a new annotation icon and an annotation type. Because we have a lot of modules with their own annotations, we differentiate them in the code completion window by their type. We support annotations for ApiGen (legacy PHPDoc annotations), PHPUnit, Doctrine 2 (ORM and ODM) and Symfony 2. Every annotation can be associated with some context. We recognize four of them: function, class/interface (type), method and field. It means that you will get just the proper annotations for your class field, as well as for your global function. Do you have your own annotations? Or do you simply miss some? It is not hard to add them: we have a simple UI for adding custom annotations, in Tools -> Options -> PHP -> Annotations. Here you can simply add, edit or delete your annotations. When you try to create a new one, all fields are prefilled with some default values, so you really don't have to remember "how to use that crazy freemarker syntax". If you are satisfied with your new annotation, you can see it in the code completion window among the other annotations. As you can see, it has its own "Custom" type. That's all for today and, as usual, please test it, and if you find something strange, don't hesitate to file a new issue (component php, subcomponent Editor). Thanks.

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example:

    Client sends input to server -> Client reacts to input -> Server receives and reacts to input -> Server sends back response -> Client reaction is undone due to latency between server and client

    To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions:

    Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state.

    How do I sync the client and server tick? Currently I'm sending the millisecond at which the packet was sent by the server, but I think it's adding too much complexity compared with just sending the tick. The problem with sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is an old, low-end PC.
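    Here is a compact sketch of the replay idea, assuming each input carries a sequence number that the server echoes back with its authoritative state (all names are illustrative):

        // Client-side reconciliation: keep unacknowledged inputs, and when an
        // authoritative state arrives, rewind to it and replay what the server
        // hasn't yet seen, instead of snapping back.
        interface Input { seq: number; dx: number; dy: number; }
        interface State { x: number; y: number; }

        const pendingInputs: Input[] = [];
        let predicted: State = { x: 0, y: 0 };

        function applyInput(s: State, i: Input): State {
          return { x: s.x + i.dx, y: s.y + i.dy }; // same rules the server runs
        }

        function onLocalInput(i: Input) {
          pendingInputs.push(i);            // remember until the server confirms it
          predicted = applyInput(predicted, i);
          // send i to the server here
        }

        function onServerState(authoritative: State, lastProcessedSeq: number) {
          // Drop inputs the server has already applied...
          while (pendingInputs.length && pendingInputs[0].seq <= lastProcessedSeq) {
            pendingInputs.shift();
          }
          // ...then re-simulate the rest on top of the authoritative state.
          predicted = pendingInputs.reduce(applyInput, authoritative);
        }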

    Read the article
