Search Results

Search found 3181 results on 128 pages for 'david winchester'.

  • No NFC for the iPhone, and here's why

    - by David Dorf
    I, like many others in the retail industry, was hoping the iPhone 5 would include an NFC chip that enabled a mobile wallet.  In previous postings I've discussed the possible business case and the foreshadowing of Passbook, but it wasn't meant to be.  A few weeks ago I was considering all the rumors, and it suddenly occurred to me that it wasn't in Apple's best interest to support an NFC chip.  Yes, they have patents in this area, but perhaps those are more defensive than a sign of new development.

    Steve Jobs always wanted to win, but more importantly he didn't want others to win at his expense.  It drove him nuts that Windows was more successful than MacOS, and clearly he was bothered by Samsung and other handset manufacturers copying the iPhone.  But he was most angry at Google for their stewardship of Android.

    If the iPhone 5 had an NFC chip, who would benefit most?  Google Wallet is far and away the leader in NFC-based payments via mobile phones in the US.  Even without Steve at the helm, Apple isn't going to do anything to help Google.  Plus Apple doesn't like to do things in an open way -- then they lose control.  For example, you don't see iPhones with expandable memory, replaceable batteries, or USB connectors.  Adding a standards-based NFC chip just isn't in their nature.

    So I don't think Apple is holding back the NFC chip for the 5S or 6.  It just isn't going to happen unless they can figure out how to prevent others from benefiting from it.  All the other handset manufacturers will use NFC as a differentiator, which may be enough to keep Google and Isis afloat, and of course Square and PayPal aren't betting on NFC anyway.  This isn't the end of alternative payments; it's just a major speed bump.

  • How to fix bad Collada produced by FBX?

    - by David
    I tried to use the FBX SDK (2011.3.1) to load FBX files and save them as Collada files, in order to be able to import FBX files in Panda3D. Unfortunately the resulting Collada files are not usable, for several reasons, among them:

    1. There's a Maya-specific extra technique under the diffuse texture:

        <diffuse>
          <texture texture="Map__2-image" texcoord="CHANNEL0">
            <extra>
              <technique profile="MAYA">
                <wrapU sid="wrapU0">TRUE</wrapU>
                <wrapV sid="wrapV0">TRUE</wrapV>
                <blend_mode>ADD</blend_mode>
              </technique>
            </extra>
          </texture>
        </diffuse>

    2. It assigns a texcoord channel name that isn't referenced anywhere else in the file (in the previous code sample, no geometry uses "CHANNEL0").

    3. Every polygon is exported twice: first with a basic material (only diffuse color, specular color, etc.) and a second time with a textured material. This doubles the number of polygons of each model without any valuable reason.

    In any case, the resulting Collada file cannot be opened correctly either with OpenCOLLADA or Panda3D's "dae2egg". Does anyone have experience with "fixing" such files and making them understandable by common and well-reputed Collada importers such as OpenCOLLADA?
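
    One workable approach, pending a fix in the exporter itself, is a post-processing pass over the exported .dae. The sketch below (plain JAXP, no extra dependencies; the class name is mine) only strips the MAYA-profile <extra> blocks -- a partial cleanup under the assumption that the importers choke on the vendor extension, and it does not address the duplicated polygons:

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class ColladaCleaner {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File(args[0]));

                // Collect first: NodeLists are live, so don't remove while iterating.
                NodeList techniques = doc.getElementsByTagName("technique");
                List<Element> mayaExtras = new ArrayList<Element>();
                for (int i = 0; i < techniques.getLength(); i++) {
                    Element t = (Element) techniques.item(i);
                    if ("MAYA".equals(t.getAttribute("profile"))) {
                        mayaExtras.add((Element) t.getParentNode()); // the <extra> wrapper
                    }
                }
                for (Element extra : mayaExtras) {
                    extra.getParentNode().removeChild(extra);
                }

                // Identity transform writes the cleaned document back out.
                TransformerFactory.newInstance().newTransformer()
                        .transform(new DOMSource(doc), new StreamResult(new File(args[1])));
            }
        }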

  • What's Old is New Again

    - by David Dorf
    Last night I told my son he could stream music to his tablet "from the cloud" (in this case, the Amazon Cloud).  He paused, then said, "What is the cloud?"  I replied, "A bunch of servers connected to the internet."  Apparently he had visions of something much more magnificent.  Another similar term is "big data."  These marketing terms help to quickly convey topics but are oversimplifications that are open to many interpretations.  At their core, those terms are shiny packages holding recycled ideas.

    I see many headlines declaring big data changes everything, but it doesn't.  Savvy retailers have been dealing with large volumes of data since the electronic cash register was invented.  But there have been a few changes to the landscape that make big data a topic of conversation:

    1. Computing power has caught up to storage volumes.  It's now possible to more thoroughly analyze the copious volumes of data retailers have been squirreling away.  CPUs are faster, solid state drives are more plentiful, and new ways to store and search data are available.  My iPhone has more power than the computer used in the Apollo mission to the moon.

    2. Unstructured data is everywhere.  The Web used to be where retailers published product information, but now users are generating the bulk of the content in the form of comments, videos, and "likes."  The variety of information available to retailers is huge, and its meaning difficult to discern.

    3. Everything is connected.  Looking at a report from my router, there are no less than 20 active devices on my home network.  We can track the location of mobile phones, tag products with RFID, and set our thermostats (I love my Nest) from a thousand miles away.  Not only is there more data, but it's arriving at higher velocity.

    Careful readers will note the three Vs that help define so-called big data: volume, variety, and velocity.  We now have more volume, more variety, and more velocity, and different technologies to deal with them.  But at the heart, the objectives are still the same: informed decisions, accurate forecasts, and improved optimizations.

    So don't let the term "big data" throw you off the scent.  Retailers still need to execute on the basics.  But do take a fresh look at the data that's available and the new technologies to process it.  The landscape will continue to change, and agile organizations will always be reevaluating their approaches.  You can just add some more weapons to the arsenal.

  • Error Copying Source File in Audio Spectrum Visualizer [closed]

    - by David Dimalanta
    I'm testing this code using LibGDX, Java, and Eclipse to try out a music player that detects frequencies. I saw it on this website, plus the link on GitHub: http://gtomee.com/2012/07/28/audio-spectrum-visualizer-with-libgdx/  It works when running from the desktop project folder, but not from the Android project folder, where the result is this:

        10-10 13:57:45.320: E/AndroidRuntime(9421): FATAL EXCEPTION: GLThread 16845
        10-10 13:57:45.320: E/AndroidRuntime(9421): com.badlogic.gdx.utils.GdxRuntimeException: Error copying source file: soundtrack 1 bioman.mp3 (Internal)
        10-10 13:57:45.320: E/AndroidRuntime(9421): To destination: tmp/audio-spectrum.mp3 (External)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:625)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyTo(FileHandle.java:534)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.bodapps.rhythm.Drop.create(Drop.java:393)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.backends.android.AndroidGraphics.onSurfaceChanged(AndroidGraphics.java:292)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1505)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1240)
        10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error stream writing to file: tmp/audio-spectrum.mp3 (External)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:313)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.copyFile(FileHandle.java:623)
        10-10 13:57:45.320: E/AndroidRuntime(9421): ... 5 more
        10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: com.badlogic.gdx.utils.GdxRuntimeException: Error writing file: tmp/audio-spectrum.mp3 (External)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:293)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:305)
        10-10 13:57:45.320: E/AndroidRuntime(9421): ... 6 more
        10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: java.io.FileNotFoundException: /storage/sdcard0/tmp/audio-spectrum.mp3: open failed: EACCES (Permission denied)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:416)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at java.io.FileOutputStream.<init>(FileOutputStream.java:88)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at com.badlogic.gdx.files.FileHandle.write(FileHandle.java:289)
        10-10 13:57:45.320: E/AndroidRuntime(9421): ... 7 more
        10-10 13:57:45.320: E/AndroidRuntime(9421): Caused by: libcore.io.ErrnoException: open failed: EACCES (Permission denied)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.Posix.open(Native Method)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.BlockGuardOs.open(BlockGuardOs.java:110)
        10-10 13:57:45.320: E/AndroidRuntime(9421): at libcore.io.IoBridge.open(IoBridge.java:400)
        10-10 13:57:45.320: E/AndroidRuntime(9421): ... 9 more

    I'm not sure if I've come to the right place for help and suggestions.
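
    The innermost cause in that trace is EACCES (Permission denied) while opening /storage/sdcard0/tmp/audio-spectrum.mp3 for writing, i.e. the app cannot write to external storage. On Android that usually means the Android project's AndroidManifest.xml is missing the permission below (an assumption worth checking first, since the desktop backend has no such restriction):

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />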

  • Building ATLAS (and later Octave w/ ATLAS)

    - by David Parks
    I'm trying to set up ATLAS (so I can later compile Octave with ATLAS support). If I'm correct, I still need to build it manually due to the environment-specific optimizations. I do see a package for ATLAS, but it looks like it's using the cross-platform generic build options (e.g. "it'll be slow"). So, running the configure script as described in the docs goes poorly. As a Java developer I never do well at making heads or tails of errors in these build processes. Am I missing dependencies (and if so, is there any documentation on what I need)?

        allusers@vbubuntu:~/Downloads/atlas3.10.1/build_vbubuntu$ ../configure -b 64 -D c -DPentiumCPS=3000 --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz
        make: `xconfig' is up to date.
        ./xconfig -d s /home/allusers/Downloads/atlas3.10.1/build_vbubuntu/../ -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu -b 64 -D c -DPentiumCPS=3000 -Si lapackref 1
        OS configured as Linux (1)
        Assembly configured as GAS_x8664 (2)
        Vector ISA Extension configured as SSE3 (6,448)
        ERROR: enum fam=3, chip=2, mach=0
        make[3]: *** [atlas_run] Error 44
        make[2]: *** [IRunArchInfo_x86] Error 2
        Architecture configured as Corei1 (25)
        ERROR: enum fam=3, chip=2, mach=0
        make[3]: *** [atlas_run] Error 44
        make[2]: *** [IRunArchInfo_x86] Error 2
        Clock rate configured as 2350Mhz
        ERROR: enum fam=3, chip=2, mach=0
        make[3]: *** [atlas_run] Error 44
        make[2]: *** [IRunArchInfo_x86] Error 2
        Maximum number of threads configured as 4
        Parallel make command configured as '$(MAKE) -j 4'
        ERROR: enum fam=3, chip=2, mach=0
        make[3]: *** [atlas_run] Error 44
        make[2]: *** [IRunArchInfo_x86] Error 2
        Cannot detect CPU throttling.
        rm -f config1.out
        make atlas_run atldir=/home/allusers/Downloads/atlas3.10.1/build_vbubuntu exe=xprobe_comp redir=config1.out \
          args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu"
        make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
        cd /home/allusers/Downloads/atlas3.10.1/build_vbubuntu ; ./xprobe_comp -v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64 -d b /home/allusers/Downloads/atlas3.10.1/build_vbubuntu > config1.out
        make[2]: gfortran: Command not found
        make[2]: *** [IRunF77Comp] Error 127
        make[2]: g77: Command not found
        make[2]: *** [IRunF77Comp] Error 127
        make[2]: f77: Command not found
        make[2]: *** [IRunF77Comp] Error 127
        Unable to find usable compiler for F77; aborting
        Make sure compilers are in your path, and specify good compilers to configure (see INSTALL.txt or 'configure --help' for details)
        make[1]: *** [atlas_run] Error 8
        make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
        make: *** [IRun_comp] Error 2
        ERROR 512 IN SYSCMND: 'make IRun_comp args="-v 0 -o atlconf.txt -O 1 -A 25 -Si nof77 0 -V 448 -b 64"'
        mkdir src bin tune interfaces
        mkdir: cannot create directory ‘src’: File exists
        mkdir: cannot create directory ‘bin’: File exists
        mkdir: cannot create directory ‘tune’: File exists
        mkdir: cannot create directory ‘interfaces’: File exists
        make: *** [make_subdirs] Error 1
        make -f Make.top startup
        make[1]: Entering directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
        Make.top:1: Make.inc: No such file or directory
        Make.top:325: warning: overriding commands for target `/AtlasTest'
        Make.top:76: warning: ignoring old commands for target `/AtlasTest'
        make[1]: *** No rule to make target `Make.inc'.  Stop.
        make[1]: Leaving directory `/home/allusers/Downloads/atlas3.10.1/build_vbubuntu'
        make: *** [startup] Error 2
        mv: cannot move ‘lapack-3.5.0’ to ‘../reference/lapack-3.5.0’: Directory not empty
        mv: cannot stat ‘lib/Makefile’: No such file or directory
        ../configure: 450: ../configure: cannot create lib/Makefile: Directory nonexistent
        ../configure: 451: ../configure: cannot create lib/Makefile: Directory nonexistent
        ../configure: 452: ../configure: cannot create lib/Makefile: Directory nonexistent
        ../configure: 453: ../configure: cannot create lib/Makefile: Directory nonexistent
        ../configure: 509: ../configure: cannot create lib/Makefile: Directory nonexistent
        DONE configure
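
    The first real failure in that log is "gfortran: Command not found" followed by "Unable to find usable compiler for F77; aborting" -- ATLAS needs a Fortran 77 compiler, and everything after that point (the mkdir / Make.inc / lib/Makefile noise) is fallout from the aborted configure. A minimal sketch of the first step, assuming Ubuntu with apt:

        sudo apt-get install gfortran    # provides the F77 compiler configure is probing for
        cd ~/Downloads/atlas3.10.1
        rm -rf build_vbubuntu && mkdir build_vbubuntu && cd build_vbubuntu   # start from a clean build dir
        ../configure -b 64 -D c -DPentiumCPS=3000 \
            --with-netlib-lapack-tarfile=/home/allusers/Downloads/lapack-3.5.0.tgz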

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, and construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.

    Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.

    Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfields, service centers, inventory sites, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets, project sites, areas, wells, basins, pipelines, critical infrastructure, offshore platforms, compressor stations, gas stations, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies, and so on. Any site can also be associated with multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting and analysis and organizing daily work. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, such as routes, pipelines, and more.

    The User Defined Attribute Framework provides the needed infrastructure to add single-row attribute groups, like well base attributes (well IDs, well type, well structure, key characterizing measures, and more) and well geometry, and multi-row attribute groups, like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents, and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partners' sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships.

    Site Hub can store any type of unstructured data associated with a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos, and more. For any site entity, Site Hub can associate all the related assets and equipment at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time.

    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub.

    Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geocoding capabilities, coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information (for example cadastral, ownership, jurisdictional, geological, seismic, and more) and any site-centric, area-specific information (for example economical, political, risk, weather, logistic, traffic information, and more).

    Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context.

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete:

        /api/location/Londo

    and the query would be something like:

        SELECT * FROM Locations WHERE Name LIKE 'Londo%'

    These locations change very infrequently, so I would like to cache them to prevent pointless trips to the database and improve the response time. Looking at caching options, I am using Windows Azure AppFabric, but the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Or should I just cache the entire table/collection in a single key with a specific data structure that could assist with the searching, and then search upon retrieval of the data?
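
    On that last option: since the data set is small and nearly static, one hedged approach is to cache the whole list under one key and keep it in a sorted structure, so prefix search never touches the database. A sketch of the idea in Java (the question's stack is ASP.NET/AppFabric, but the structure translates directly; names are mine):

        import java.util.NavigableSet;
        import java.util.TreeSet;

        public class LocationPrefixCache {
            // The entire (rarely changing) location list, kept sorted once.
            private final NavigableSet<String> names =
                    new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);

            public LocationPrefixCache(Iterable<String> allLocationNames) {
                for (String n : allLocationNames) names.add(n);
            }

            // Every name starting with the prefix: the range [prefix, prefix + '\uffff']
            // is exactly the LIKE 'Londo%' result, located in O(log n).
            public NavigableSet<String> startingWith(String prefix) {
                return names.subSet(prefix, true, prefix + '\uffff', true);
            }
        }

    Rebuild the structure whenever the underlying table changes (or on a timer), and the autocomplete endpoint stays off the database on the hot path.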

  • Slow Start For Passbook

    - by David Dorf
    Like many others, I pre-ordered my iPhone 5 then downloaded iOS 6 to my antiquated iPhone 4.  I decided the downgrade in mapping capabilities was worth access to Passbook, Apple's wallet of sorts that holds loyalty cards, tickets, and coupons.  To my disappointment, Passbook didn't work.  When it goes to the iTunes Store, it can't connect.  After a little research, I read that you can change the date on the iPhone to the future (I did March 2013), and then it will connect.  A list of apps that support Passbook is shown, some of which were already on my iPhone and others that required downloading.  Even when I put the date back on "automatic," things continued to work.  Not sure why.

    Anyway, even once I got into iTunes and made sure I had some of the apps downloaded, it wasn't clear what the next step was (gimme a break, it's Friday afternoon).  Every time I opened Passbook, it sent me to the "Apps for Passbook" page on iTunes.  I tried downloading one of the suggested apps that I didn't already have (Walgreens).  The app's icon has a "new" stripe across it.  I launched it and it said it had Passbook integration.  So I needed to log in or sign up with the loyalty program.  After figuring out what my username and password were, it then offered to add the loyalty card to Passbook, which I accepted.  Now when I flip over to Passbook, I can see the loyalty card there.  I guess I need to go into each app to "push" cards into Passbook.

    People seem to be using it.  Twenty-four hours after iOS 6 was released, Sephora had 20,000 users of Passbook.  Starbucks says they'll be integrated with Passbook by the end of the month, and Target is already offering coupons via Passbook.  After a few more retailers get on board, Apple may not need to consider NFC.

  • A good example project to 'prove' my skills [closed]

    - by David Archer
    I've been a commercial programmer for about 3 years now, but all of my commercial work has been based on PHP (with CakePHP, WordPress, and Wildfire) and ASP.NET (in C#, including MVC 3, Umbraco, and Kentico), with plenty of HTML/CSS/jQuery examples to show. A prospective employer has asked me to show my Ruby on Rails potential. I've done Ruby on Rails before for fun, but nothing worthy of commercial showing. What I'd like to know, from a group of programmers, is: what would be a good 'portfolio demo' piece for you? What have you seen in the past that impressed you? What are you looking for? For Ruby lead developers specifically, what sort of things are you looking to see in the code? Cheers!

  • Highlight current page tab [migrated]

    - by Jose David Garcia Llanos
    I am making a website, and I am currently setting up highlighting of the tab for the current page. One particular page is not highlighting its tab, and I have checked the code about 5 times but I can't find anything wrong with it. The website is auto-sal.es. Here is the code:

    style.css:

        body#home a.hometab,
        body#cars a.cartab,
        body#feedback a.feedtab,
        body#contact a.contacttab,
        body#members a.memberstab { background: #7D0000; }

    contactus.html:

        <body id="contact">

    navigation:

        <ul id="menu">
          <li><a href="index.html" target="_self" class="hometab">Home</a></li>
          <li><a href="cars.html" target="_self" class="cartab">Cars</a></li>
          <li><a href="feedback.html" target="_self" class="feedtab">Feedback</a></li>
          <li><a href="contactus.html" target="_self" class="cotacttab">Contact Us</a></li>
          <li><a href="members.html" target="_self" class="memberstab">Members</a></li>
        </ul>

    Again, the issue is that it is not highlighting the tab for Contact Us.
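
    (Worth noting when reading the code above: the stylesheet targets a.contacttab, but the Contact Us link's class is spelled cotacttab -- the missing "n" would produce exactly this symptom.)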

  • Open Source AI Bot interfaces

    - by David Young
    What are some open source AI bot interfaces, similar to Pogamut 3 GameBots2004 for custom Unreal Tournament bots, or the Brood War API for StarCraft bots, etc.? Please post one AI bot interface per answer (make sure to provide a link) and give a brief summary of it. Please include what type of bot interface structure it is (client/server, server/server, etc.); e.g., BWAPI is client/server and emulates a real player.

  • Good questions to ask a potential new boss?

    - by David Johnstone
    I first asked this question on Stack Overflow, but it turns out this is a better place for it. Imagine you were working as a software developer. Imagine that the manager of your team leaves and your company is looking for a replacement. Imagine that as part of the hiring process you had the opportunity to talk with him. You are not the only person doing an interview, and while it is not ultimately your decision whether or not to hire him, you do have an influence. What questions would you ask? What would you talk with him about?

  • Secure login for a game that is open source

    - by David Park
    I am making a game which I will be open sourcing. It's a simple arcade-like game, but it requires a network connection because it is meant to be played with other people. The thing I am worrying about is: how would I be sure that the client is the one that I put out for the end user to play with? Something like sv_pure for Team Fortress 2. I was thinking of different ways to combat this, such as the server requesting the client's version or even its md5 hash, but people with simple Java knowledge could just force a method to always return what the server wants.
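
    For concreteness, the kind of check being described looks like the sketch below (hypothetical class, using Java's built-in MessageDigest) -- and it fails for exactly the reason given: in an open source client, nothing stops a fork from hard-coding the "right" answer, so the server can never trust it. The usual mitigation is a server-authoritative design where the server validates the moves, not the binary.

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class ClientHash {
            // Hash the client's own jar and report it to the server on login.
            // A modified client can simply return a saved digest here, so this
            // raises the bar slightly but proves nothing by itself.
            public static String md5Of(String jarPath)
                    throws IOException, NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("MD5");
                FileInputStream in = new FileInputStream(jarPath);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        md.update(buf, 0, n);
                    }
                } finally {
                    in.close();
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest()) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }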

  • Attend COLLABORATE 11 Virtually

    - by david.stokes(at)oracle.com
    Stay connected to one of the leading Oracle training and educational events - COLLABORATE 11 - IOUG Forum. Join virtually by attending Plug-In to Orlando for just $299. Oracle and other leading industry experts will present over 40 hours of live presentations on topics such as Database, Development, Business Intelligence, Security, Data Warehousing, and more. For a full list of scheduled Plug-in sessions, click here. Register now and enter the priority code PC07 to claim your group license.

  • The Apple Passbook

    - by David Dorf
    In a previous job I worked on smart card systems.  Our vision was to replace the physical wallet with a chip card that contained stored value, credit cards, and loyalty cards.  The technology was up to the task, but the business model never worked out.  When all those things go onto a single card, who owns the card and maintains the applications?  Each bank wanted its own card with branding, so instead of consolidating lots of cards onto one, we ended up with the same number of cards, just more expensive chip cards.  The Costanza wallet would not die.

    More recently I've been able to move lots of these cards into iOS apps using products like CardStar, TripIt, and Fandango.  I guess moving from physical to digital is progress, but still no consolidation.  But this week Apple announced its Passbook, an iOS feature that consolidates boarding passes, loyalty cards, and movie tickets.  Another step in the right direction.

    We've been waiting for Apple to announce an NFC solution that takes advantage of the 400 million credit cards it stores in iTunes for its customers.  Perhaps Passbook is the first step in that direction.  It wouldn't take much to add credit cards to Passbook, then enable secure transfer of the track data using an NFC-equipped iPhone.  I've got to think this has to be part of the larger vision, but of course Apple is very secretive.

    I think the steps will be loyalty, coupons, and then payment when it comes to the evolving Passbook.  Retailers should keep an eye on Apple, and expect these things to happen in the Apple stores first.

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the frequent use of Factories. Since you can't instantiate a Property or Model, you must have something else do it for you -- the Factory design pattern.

    My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate that class; I don't need to ask a Factory to give me one.

    As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out if I should adopt the Jena philosophy in my library which uses Jena, or if I should disregard it, pick my own design philosophy, and stick with it.
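
    For reference, the pattern in question looks like this in Jena (a minimal sketch; newer releases use the org.apache.jena package namespace, older ones com.hp.hpl.jena, and the example URIs are made up). The point of the factory is that client code binds only to the Model interface, so the concrete backing store is chosen in one place:

        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;
        import org.apache.jena.rdf.model.Property;
        import org.apache.jena.rdf.model.Resource;

        public class JenaFactoryDemo {
            public static void main(String[] args) {
                // Model, Resource and Property are all interfaces; the factory
                // picks the implementation (here, a plain in-memory model).
                // A database- or inference-backed model changes only this line.
                Model model = ModelFactory.createDefaultModel();

                Resource tree = model.createResource("http://example.org/tree/1");
                Property hasPearl = model.createProperty("http://example.org/ns#", "hasPearl");
                tree.addProperty(hasPearl, model.createResource("http://example.org/pearl/42"));

                model.write(System.out, "RDF/XML");
            }
        }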

  • Customer Experience Management: A conversation with world experts in RTD

    - by David lefranc
    A conversation with world experts in Customer Experience Management in Rome, Italy - Wed, June 20, 2012. It is our pleasure to share the registration link below for your chance to meet active members of the Oracle Real-Time Decisions Customer Advisory Board. Join us to hear how leading brands across the world have achieved tremendous return on investment through their Oracle Real-Time Decisions deployments, and do not miss this unique opportunity to ask them specific questions directly during our customer roundtable. Please share this information with anyone interested in real-time decision management and cross-channel predictive process optimization: http://www.oracle.com/goto/RealTimeDecisions


  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face -- I couldn't log on.

    Our product is using a custom ASP.NET Membership Provider, but essentially it's a modified SqlMembershipProvider with some additional properties. And my login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash the membership provider was comparing against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than my .NET 3.5 code line. (Tip -- when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.)

    So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why was it happening?" Turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great -- a stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:

    1. At the beginning of your <system.web> element, add the following <machineKey> element:

        <system.web>
          <machineKey validation="SHA1" />
          ...
        </system.web>

    2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute:

        <system.web>
          <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
          ...
        </system.web>

    After that, you should be good to go! Hope this helps.

  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back.

    At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights:

    1. Checkin.  We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive that will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome message using Starmount's digital sign.

    2. Receive Offers.  There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign.

    3. Scan Products.  Rather than having consumers scan product barcodes, we used Store Inventory Management to print QRCodes on shelf labels, then provided access to a scanner in the Store Gateway iPhone app. When consumers scan the shelf label, they are shown product information provided by the retailer.

    4. Checkout.  While we don't have an NFC-enabled mobile phone, we have an NFC chip that can attach to a phone. We're using this to checkout using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer#, coupons, and payment information. This really speeds up the checkout process.

    5. Digital Receipt.  After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts, where consumers store all their digital receipts. There's even an iPhone app that provides easy access to the receipts.

    This covers about half of what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session, NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

  • Is there a way to disallow only crawling in https in robots.txt?

    - by David Wilkins
    I just realized that Bingbot is crawling my company's website's pages over https. Bing already crawls the site over http, so this seems frivolous. Is there a way to specify Disallow: / for https only? According to Wikipedia, each protocol has its own robots.txt, while according to Google's Robots.txt Specification, the robots.txt applies to http AND https. I don't want to Disallow: / for Bing entirely, just over https.
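
    If the Wikipedia reading holds for Bing (one robots.txt per protocol), the answer is to serve a different robots.txt for https requests -- most servers can do this with a rewrite rule mapping the https robots.txt URL to a second file. A sketch of what that https-only file would contain, assuming bingbot is the only crawler being targeted:

        User-agent: bingbot
        Disallow: /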

  • Naming and Designing Weapons Based on Real Life During Game Development [duplicate]

    - by David Dimalanta
    This question already has an answer here: Do you need a license for weapon models? (6 answers)

    Is it legitimate, and copyright-safe, to use the actual names of gun models such as AK-47, M16, Remington 870, and so on? I'm working on a simple 2D third-person shooter game. Counter-Strike is one example: it lists weapon names based on real-life models, and the developers designed the weapons accordingly. If it's not safe, should I falsify the weapon names (e.g. "9mm" instead of "Glock 17", as in the Syphon Filter games), or invent fictional weapons like the ones behind the Halo games?

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics beyond just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than by the tests, for example), and so on.

    I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them.

    The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay at a relative equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers.  Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet.  But their approaches stem from the one-way nature of traditional advertising.  With TV, radio, and magazines there is no opportunity to truly connect to customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups.  To a large extent, we think of this as CRM.

    But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media?  The notion of marketing as conversations was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans.  A more practical implementation is Project VRM, which is a reverse CRM of sorts.  Instead of vendors managing their relationships with customers, customers manage their relationships with vendors.

    Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers.  And unfortunately, they typically don't give you a say in the matter.  Yes, they might tailor the content for "female, age 25-35, interested in shoes," but that's not really the essence of you, is it?  A better approach is to let consumers volunteer information about themselves.  And why wouldn't they, if it means a better, more relevant shopping experience?  I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive.

    I really like this diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM.  The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately.  Also, Amazon does a pretty good job of allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon.  You can mark items as gifts, or explicitly exclude them from its recommendation engine.  This is a win-win for both the consumer and retailer.

    So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me.  We'll both have a better experience in the long run.

  • Oracle MDM Maturity Model

    - by David Butler
    A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together.

    The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization.

    Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems.

    Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies.

    Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed.

    Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined.

    Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects for full no-risk data utilization.

    For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals.

    Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, siloed structures with limited integration; and gaps in automation.

    Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well.

    Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise-wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation.

    Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM, and MDM is leveraged in business process orchestration.

    Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity, with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It's free, so go ahead and download it and use it as you see fit.
