Search Results

Search found 3807 results on 153 pages for 'django voting'.

  • Centralized way of organizing urls in Jersey?

    - by drozzy
    Forgive me if I am asking an obvious question (maybe I missed it in the docs somewhere?) but has anyone found a good way to organize their URLs in the Jersey Java framework? I mean organizing them centrally in your Java source code, so that you can be sure there are not two classes that refer to the same URL. For example, Django has really nice regex-based matching. I was thinking of doing something like an enum: enum Urls { CARS("cars"), CAR_INFO("car", "{info}"); private final String[] parts; Urls(String... parts) { this.parts = parts; } } but you can imagine that gets out of hand pretty quickly if you have urls like: cars/1/wheels/3 where you need multiple path-ids interleaved with one another... Any tips?

    Read the article

  • Own params to PeriodicTask run() method in Celery

    - by Alex Isayko
    Hello to all! I am writing a small Django application and need to create, for each model object, a periodic task that executes at a certain interval. I'm using Celery for this, but I can't understand one thing: class ProcessQueryTask(PeriodicTask): run_every = timedelta(minutes=1) def run(self, query_task_pk, **kwargs): logging.info('Process celery task for QueryTask %d' % query_task_pk) task = QueryTask.objects.get(pk=query_task_pk) task.exec_task() return True Then I do the following: >>> from tasks.tasks import ProcessQueryTask >>> result1 = ProcessQueryTask.delay(query_task_pk=1) >>> result2 = ProcessQueryTask.delay(query_task_pk=2) The first call succeeds, but the subsequent periodic calls return the error TypeError: run() takes exactly 2 non-keyword arguments (1 given) in the celeryd server. So, can I pass my own params to PeriodicTask's run()? Thanks!
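
    A hedged sketch of one workaround, since celerybeat invokes a PeriodicTask's run() without positional arguments: rather than parameterizing the periodic task per object, let a single periodic task fan out over every QueryTask. The import paths and model location here are assumptions, not the poster's actual layout.

        # Sketch only: beat calls run() with no positional args, so iterate
        # over the model objects inside the task instead of parameterizing it.
        from datetime import timedelta
        import logging

        from celery.task import PeriodicTask   # Celery 2.x-era import path
        from myapp.models import QueryTask     # hypothetical app path

        class ProcessQueryTasks(PeriodicTask):
            run_every = timedelta(minutes=1)

            def run(self, **kwargs):
                for task in QueryTask.objects.all():
                    logging.info('Processing QueryTask %d' % task.pk)
                    task.exec_task()
                return True

    Calling .delay() with keyword arguments still works for one-off runs; it is the scheduler-driven invocations that arrive with no arguments.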

    Read the article

  • deleting unaccessed files using python

    - by damon
    My Django app parses some files uploaded by the user. It is possible that an uploaded file remains on the server for a long time without being parsed by the app, and the backlog can grow if a lot of users upload a lot of files. I need to delete files not recently parsed by the app, say, not accessed for the last 24 hours. I tried this: import os import time dirname = MEDIA_ROOT+my_folder filenames = os.listdir(dirname) filenames = [os.path.join(dirname,filename) for filename in filenames] for filename in filenames: last_access = os.stat(filename).st_atime #secs since epoch rtime = time.asctime(time.localtime(last_access)) print filename+'----'+rtime This shows the last accessed time for each file, but I am not sure how to test whether the access time was within the last 24 hours. Can somebody help me out?
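
    For the 24-hour test itself, comparing raw epoch seconds is simpler than formatting the timestamps. A minimal sketch building on the code above, assuming MEDIA_ROOT and my_folder are defined as in the question:

        import os
        import time

        dirname = MEDIA_ROOT + my_folder       # assumed defined, as in the question
        cutoff = time.time() - 24 * 60 * 60    # epoch seconds, 24 hours ago

        for filename in os.listdir(dirname):
            path = os.path.join(dirname, filename)
            if os.stat(path).st_atime < cutoff:   # not accessed in the last 24 hours
                os.remove(path)

    One caveat worth checking first: on filesystems mounted with noatime, st_atime is never updated, in which case st_mtime is the safer field to compare.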

    Read the article

  • Catch all javascript errors and send them to server

    - by ssaboum
    Hi everyone, I wondered if anyone has experience in handling JavaScript errors globally and sending them from the client browser to a server. I think my point is quite clear: I want to know about every exception, error, compilation error, etc. that happens on the client side, and send them to the server to report them. I'm mainly using MooTools and head.js (for the JS side) and Django for the server side (not that it matters...). Thank you for your help. Regards.
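
    On the Django side of this, a minimal sketch of a reporting endpoint; the view name, logger name and POST field names are assumptions, and the client would wire window.onerror (or a try/catch wrapper) to POST those fields here:

        import logging
        from django.http import HttpResponse
        from django.views.decorators.csrf import csrf_exempt

        logger = logging.getLogger('client_errors')   # hypothetical logger name

        @csrf_exempt   # reports come from plain JS, not a Django form with a token
        def log_js_error(request):
            # window.onerror hands the client (message, url, line) to forward
            logger.error('JS error: %s at %s:%s',
                         request.POST.get('message'),
                         request.POST.get('url'),
                         request.POST.get('line'))
            return HttpResponse('ok')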

    Read the article

  • Methods of sending web-generated config files to servers and restarting services.

    - by JPG
    Hi, We're writing a web-based tool to configure our services provided by multiple servers. This includes interface configuration, DHCP configs, etc. With the configs in a database and views that generate the proper output, how do we send that output to the servers, or make it available to them? I'm thinking about sending it through scp and invoking the reload command on services through ssh. I'm also thinking about using Func to do the whole job, as it is a Python tool and should integrate well with a Python-based (Django) config tool. Any other proposals?
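
    For the scp-plus-ssh route, a rough sketch of the shape it could take, assuming key-based SSH from the web host, that the generated config has already been written to local_path, and that the target hosts honor the service(8) reload convention:

        import subprocess

        def push_config(host, local_path, remote_path, service):
            # copy the generated config over, then reload the affected service
            subprocess.check_call(['scp', local_path, '%s:%s' % (host, remote_path)])
            subprocess.check_call(['ssh', host, 'service %s reload' % service])

    Since check_call raises CalledProcessError on a non-zero exit status, a failed copy never triggers the reload step.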

    Read the article

  • Get "2:35pm" instead of "02:35PM" from Python date/time?

    - by anonymous coward
    I'm still a bit slow with Python, so I haven't got this figured out beyond what's obviously in the docs, etc. I've worked with Django a bit, where they've added some datetime formatting options via template tags, but in regular Python code how can I get the 12-hour hour without a leading zero? Is there a straightforward way to do this? I'm looking at the 2.5 and 2.6 docs for "strftime()" and there doesn't seem to be a formatting option there for this case. Should I be using something else? Feel free to include any other time-formatting tips that aren't obvious from the docs. =)
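
    For what it's worth, the usual workaround: %I is zero-padded by definition, and the unpadded variants (%-I on glibc, %#I on Windows) are platform-specific, so stripping the pad by hand is the portable route. A small sketch:

        from datetime import datetime

        now = datetime.now()
        # %I zero-pads the 12-hour value; strip the pad and lower-case AM/PM
        print now.strftime('%I:%M%p').lstrip('0').lower()   # e.g. 2:35pm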

    Read the article

  • Python: What does _("str") do?

    - by Rosarch
    I see this in the Django source code: description = _("Comma-separated integers") description = _("Date (without time)") What does it do? I try it in Python 3.1.3 and it fails: >>> foo = _("bar") Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> foo = _("bar") NameError: name '_' is not defined No luck in 2.4.4 either: >>> foo = _("bar") Traceback (most recent call last): File "<pyshell#1>", line 1, in -toplevel- foo = _("bar") NameError: name '_' is not defined What's going on here?
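
    Sketching the short answer: _ is just the conventional name for a gettext translation function. Django binds it with an explicit import at the top of each file, and the stdlib can install it into builtins, which is why a bare interpreter session does not have it. The domain name below is hypothetical:

        # In Django source files, _ comes from an explicit import:
        from django.utils.translation import ugettext_lazy as _
        description = _("Comma-separated integers")   # marks the string for translation

        # In plain Python, gettext.install() puts _ into builtins:
        import gettext
        gettext.install('myapp')   # hypothetical translation domain
        print _("bar")             # now defined; returns "bar" if no translation is found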

    Read the article

  • Can jquery capture the dynamic dom events and perform actions

    - by Zombie15
    I just wanted to know if something like this is possible: jQuery should fire some action on a certain custom event. For example, whenever a new row is added dynamically to a table in the DOM, I want to run a certain action, like changing the background color to red. That should work across the whole site. Something like event listeners in Doctrine2 or signals in Django. EDIT: Basically I want something where I can create a custom event: $.AddnewEvent(newRowAdded); Then I can customise that event with my own functions, like: $.newRowAdded(function(){ blah blah });
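
    For contrast, the Django signals pattern the question mentions looks roughly like this in Python (the model name is hypothetical; the @receiver decorator is Django 1.3+):

        # A receiver runs after every save, with created=True for new rows.
        from django.db.models.signals import post_save
        from django.dispatch import receiver
        from myapp.models import TableRow   # hypothetical model

        @receiver(post_save, sender=TableRow)
        def on_new_row(sender, instance, created, **kwargs):
            if created:
                print 'new row added:', instance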

    Read the article

  • Does migrating a site that is 99% several megs of static HTML from Apache to Google App Engine make sense?

    - by JonathanHayward
    I have a large site of mostly static content, and I have entertained migrating to Google App Engine. I am wondering, not so much if it is possible as whether that is cutting a steak with a screwdriver. I see a way to do it in Django that has a bad design smell. Does migrating a literature site that is largely static HTML from Apache to Google App Engine make sense? I'm not specifically asking for a comparison to Nginx or Cherokee; I am interested in whether migrating from a traditional web hosting solution to a more cloudy type of solution recommends itself. The site is JonathansCorner.com, and is presently unavailable ("the magic blue smoke has escaped").

    Read the article

  • Silverlight 5 – What's New? (Including Screenshots & Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address people saying that it is dead after PDC 2010. I believe that it's best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications shown during the Silverlight Firestarter. Some of the companies have shipped and some haven't. It's just great to see the actual company names that are working on Silverlight instead of "people are developing for Silverlight". The next thing that I wanted to point out was that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Guthrie's keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology. Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let's get started… what's new in Silverlight 5? I am going to show you some great applications and actual code shown during the Firestarter event. Media Hardware Video Decode – Instead of using the CPU to decode, we will offload it to the GPU. This will allow netbooks, etc. to play videos. Trickplay – Variable Speed Playback – Pitch Correction (If you speed up someone talking they won't sound like a chipmunk). Power Management – Less battery used when playing video. Screensavers will no longer kick in if watching a video. If you pause a video then the screensaver will kick in. Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fastforward. IIS Media Services 4 has shipped and now supports Azure. Data Binding Layout Transitions – With just a few lines of XAML you can create a really rich experience that is not using Storyboards or animations. RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control. Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support. Changing Styles during Runtime By Binding in Style Setters – Changing styles at runtime used to be a real pain in Silverlight 4; now it's much easier. Binding in style setters allows bindings to reference other properties. XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding. WCF & RIA Services WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange. You can reduce network latency by using a background thread for networking. Supports Azure now. Text and Printing Improved text clarity that enables better text rendering. Multi-column text flow, character tracking and leading support, and full OpenType font support. Includes a new Postscript Vector Printing API that provides control over what you print. Pivot functionality baked into the Silverlight 5 SDK.
    Graphics Immediate mode graphics support that will enable you to use the GPU and 3D graphics support. Take a look at what was shown in the demos below. 1) A 3D view of the Earth – not really a real-world application, though. 2) A doctor's portal. This demo really stood out for me as it shows what we can do with the 3D / GPU support. Out of Browser OOB applications can now create and manage child windows as shown in the screenshot below. Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries. Enterprise Group Policy Support allows enterprises to lock down or open up the sandbox capabilities of Silverlight 5 applications. In this demo, he tore the "notes" off of the application and it appeared in a new window. See the black arrow below. In this demo, he connected a USB device, which fired off a local Win32 application that provided the data off the USB stick to Silverlight. Another demo showed a Silverlight 5 application exporting data right into Excel, running inside the browser. Testing They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated testing without writing any code manually. Performance: Microsoft has worked to improve the Silverlight startup time. Silverlight 5 provides 64-bit browser support. Silverlight 5 also provides IE9 hardware acceleration. I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.

    Read the article

  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.] My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes: The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google. Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future This really got me thinking about where search might go in the future and as my mind wandered I realised that as the amount of data that we collect about ourselves increases so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing and in the near future I fully expect that we are going to be able to store personal data such as: A history of our location (in fact Google Latitude already offers this facility) Recordings of all our phone conversations Health information history (weight, blood pressure etc…) Energy usage Spending history What films we watch, what radio stations we listen to Voting history Of course, most of this stuff is already stored somewhere but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it. That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.
    As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the "personal datastore" and we will want, and need, to search through it in order to make sense of it all. It's interesting, then, that today when we think of search we think of search engines and yet in these personal datastores we're referring to data that search engines can't touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn't even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end, it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal, why would we be interested in anything else? @Jamiet

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controllers highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in a HA environment and some of the new commands. What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and to start back up on a standby node. What are the prerequisites? To install EC in a HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured - we will not discuss these in this post. However, best practices should be applied when installing and configuring; I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn: Hardware : Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, network cards, for example. Operating System : We can use Solaris 10 9/10 or higher, Solaris 11, OEL 5.5 or higher on x86 or SPARC. Network : There are a number of requirements for network cards in clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public / private and Virtual IPs (VIPs). Storage : Shared storage will be required for the cluster voting disks, Oracle Cluster Registry (OCR) and the EC's libraries. Clusterware : Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html Remote Database : Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242 For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed. Connecting to the Enterprise Controller from your favourite browser. Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/, and this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page, indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Tony Berk
    There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! As I mentioned in my earlier post about some of the keynote sessions, Is There a Cloud Over OpenWorld?, I'm going to try to highlight some key sessions to help you find the best sessions for you. Interested to find out where Oracle CRM products are headed? Then find your "roadmap" session. Here are some of the sessions in the CRM Track that you might want to consider attending for products you currently own or might consider for the future. I think you'll agree, there is quite a bit of investment going on across Oracle CRM. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations. Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15PM - 4:15PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements. Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15PM - 1:15PM. The world's most complete CRM solution, Oracle's Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications, and Oracle Policy Automation in conjunction with their current Siebel investments. Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM.
Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle’s social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes. Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45AM - 11:45AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers. Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15PM - 1:15PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle’s product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families. Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15PM - 1:15PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line. There is much more, so stay tuned for more highlights or check out the Content Catalog and search for your areas of interest. Which session are you most interested in? Make your suggestions! But no voting for Pearl Jam or Kings of Leon. Those are after hours! 

    Read the article

  • GI Startup Sequence

    - by Allen Gao
    This post describes the startup sequence of 11gR2 Grid Infrastructure (GI). GI startup can be divided into three layers: the ohasd layer, the cluster services layer, and the resources layer.
    First, the ohasd layer.
    1. /etc/inittab contains the entry: h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null which spawns the process: root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run For this process to start, the following are required: + init.ohasd is in place + the OS is at the appropriate run level + the S* ohasd startup script (e.g. S96ohasd) is in place + GI auto-start is enabled (crsctl enable crs) Next, ohasd.bin starts and reads its configuration from the OLR, so if ohasd.bin fails to start, first check that the OLR is in place and intact. The OLR is located at $GRID_HOME/cdata/${HOSTNAME}.olr.
    2. ohasd.bin then starts the agents (orarootagent, oraagent, cssdagent and cssdmonitor). If an agent fails to start, check that the binaries under $GRID_HOME/bin are intact, with the correct ownership and permissions and no corruption.
    Second, the cluster services layer.
    1. mdnsd uses multicast to discover the other nodes of the cluster.
    2. gpnpd starts, completes its bootstrap, and synchronizes the gpnp profile ($GRID_HOME/gpnp/profiles/peer/profile.xml) across the nodes, relying on mdnsd to locate the other cluster nodes.
    3. gipcd starts and manages communication over the cluster interconnect; it obtains the interconnect information from gpnpd, so it depends on gpnpd working properly.
    4. ocssd.bin reads the voting disk locations from the gpnp profile and joins the cluster. If ocssd.bin fails to start, check that: + the gpnp profile is in place and readable + gpnpd is running properly + the ASM disks holding the voting files are accessible + the private network is working
    5. The remaining services of this layer then start: ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf, ora.crsd, etc. Note: ocssd.bin, gpnpd.bin and gipcd.bin start cooperatively; ocssd.bin and gipcd.bin wait for gpnpd.bin, and if gpnpd.bin fails they read the gpnp profile directly.
    Third, the resources layer, in which crsd starts the cluster resources.
    1. crsd starts and reads the OCR. If the OCR is stored in ASM, the ASM instance must already be up with the diskgroup containing the OCR mounted; otherwise crsd cannot start.
    2. crsd starts its agents (orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner>). These agents also run from $GRID_HOME/bin; if they fail to start, check that the binaries are intact.
    3. crsd then starts the resources: ora.net1.network : the network resource, on which scanvip, vip and listener resources depend; if it fails, vip, scanvip and listener go offline. ora.<scan_name>.vip : the SCAN VIPs, up to 3 of them. ora.<node_name>.vip : the VIP of each node. ora.<listener_name>.lsnr : the listeners; from 11gR2 onwards listener.ora is generated and maintained automatically. ora.LISTENER_SCAN<n>.lsnr : the SCAN listeners. ora.<diskgroup>.dg : the ASM diskgroup resources, which mount and dismount the diskgroups. ora.<db_name>.db : the database resource; in 11gR2 a single resource represents the whole RAC database, the instance-to-node mapping being held in the "USR_ORA_INST_NAME@SERVERNAME(<node name>)" attribute, and its dependency on the diskgroup resources can be adjusted with crsctl modify res. ora.<service_name>.svc : the database service resource; in 11gR2 this is a single resource, whereas in 10gR2 each service consisted of srv and cs resources. ora.cvu : introduced in 11.2.0.2, this resource runs cluvfy checks against the cluster periodically. ora.ons : the ONS resource.
    Finally, the log files most useful for diagnosing GI startup problems:
    $GRID_HOME/log/<node_name>/ocssd <== ocssd.bin log
    $GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin log
    $GRID_HOME/log/<node_name>/gipcd <== gipcd.bin log
    $GRID_HOME/log/<node_name>/agent/crsd <== logs of the agents spawned by crsd.bin
    $GRID_HOME/log/<node_name>/agent/ohasd <== logs of the agents spawned by ohasd.bin
    $GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin log
    $GRID_HOME/log/<node_name>/client <== logs of the GI client tools (ocrdump, crsctl, ocrcheck, gpnptool, etc.)
    $GRID_HOME/log/<node_name>/ctssd <== ctssd.bin log
    $GRID_HOME/log/<node_name>/crsd <== crsd.bin log
    $GRID_HOME/log/<node_name>/cvu <== cluvfy resource log
    $GRID_HOME/bin/diagcollection.sh <== script that collects all of the logs above
    Also note the hidden socket directories /var/tmp/.oracle and /tmp/.oracle: the socket files in them are used for IPC between the GI daemons, so do not delete or modify them while GI is running.

    Read the article

  • Is Python Interpreted or Compiled?

    - by crodjer
    This is just a wondering I had while reading about interpreted and compiled languages. Ruby is no doubt an interpreted language, since source code is compiled by an interpreter at the point of execution. On the contrary, C is a compiled language, as one has to compile the source code first according to the machine and then execute it. This results in much faster execution. Now coming to Python: A Python code (somefile.py) when imported creates a file (somefile.pyc) in the same directory. Let us say the import is done in a Python shell or Django module. After the import I change the code a bit and execute the imported functions again to find that it is still running the old code. This suggests that *.pyc files are compiled Python files similar to the executable created after compilation of a C file, though I can't execute the *.pyc file directly. When the Python file (somefile.py) is executed directly ( ./somefile.py or python somefile.py ) no .pyc file is created and the code is executed as is, indicating interpreted behavior. These suggest that a Python code is compiled every time it is imported in a new process to create a .pyc, while it is interpreted when directly executed. So which type of language should I consider it as? Interpreted or compiled? And how does its efficiency compare to interpreted and compiled languages? According to Wikipedia's Interpreted Languages page it is listed as a language compiled to Virtual Machine Code; what is meant by that? Update Looking at the answers it seems that there cannot be a perfect answer to my questions. Languages are not only interpreted or only compiled, but there is a spectrum of possibilities between interpreting and compiling. From the answers by aufather, mipadi, Lenny222, ykombinator, the comments and the wiki I found out that in Python's major implementations it is compiled to bytecode, which is a highly compressed and optimized representation and is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. Also, languages themselves are not interpreted or compiled; rather, language implementations either interpret or compile code. I also found out about just-in-time compilation. As far as execution speed is concerned, the various benchmarks cannot be perfect and depend on context and the task which is being performed. Please tell me if I am wrong in my interpretations.
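
    One way to see the compiled-to-bytecode half of the story directly is the dis module, which disassembles the instructions the CPython virtual machine executes; this is exactly what a .pyc file caches:

        import dis

        def add(a, b):
            return a + b

        # Prints VM instructions such as LOAD_FAST, BINARY_ADD, RETURN_VALUE:
        # the interpreter loop runs these, not your source text.
        dis.dis(add)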

    Read the article

  • Just another web startup - platform comparison

    - by Holland
    I'm looking to do a web startup which involves something along the lines of an ecommerce site, yet a little more in depth than that. While it's something that I would rather not go into detail with in terms of the initial idea, I can specify (on a basic level) what would be required of the website. If you have any observations or opinions derived from personal experience, which relate to what you see here, I'd appreciate it if you could share them. Paypal's API interaction (definitely). From what I've read about their API, integrating it into a website is VERY expensive, so I'd probably hold off on that until I've (hopefully) generated money and write my own simple credit-card interaction system. SQL Backend (obviously). PostgreSQL seems like a pretty good choice, as from what I've read, its structure is a bit more "object-oriented" than, say, MySQL's. Then again, I've used MySQL before and haven't had much problem with it whatsoever. Would it be worth learning PostgreSQL for this purpose? Java or .Net implementation (preferably Mono, so I can use .Net while hosting the website using Apache). The reason for this is because, frankly, while I know PHP is a great platform to develop websites with, I hate developing with it. Before someone chimes in and flames me for saying that, note that I have nothing against the language, I just don't like it for my purposes. While Mono may be good to go with, I'm aware that ASP.Net MVC 3 hasn't been released for Mono yet, which may be a pain to work with without their Razor syntax. On top of that, it seems Java is completely FULL of class libraries which deal with web development, that can be downloaded from the web. If anyone has any experience with these, I'd appreciate it if that were posted. From what I've read about Spring and Struts2, they seem to be the best out there - especially since they're (AFAIK) MVC. I've considered Python and Django, which do seem REALLY nice, but I don't know much Python, and I'd rather start with something that I already know (language-wise; not framework-wise) than dive into learning a language AND a new framework. I'd REALLY like to be able to host my website via Apache, rather than using Windows Server or anything like that, as, frankly, I hate their setup. I'm not dissing it in any way, shape, or form, I'm just saying I dislike it. <3 terminal config. If there is a good reason to go with Windows Server, however, I'd be willing to learn it. C# has a lot of things that Java appears not to have, including delegates, unsigned types, and LINQ. Is there anything that Java has which can counter these?

    Read the article

  • Career Advice: finding challenging work in software and web development

    - by dianovich
    Having left my physics degree early, I started out in the realm of web design / front end web development and was able to get work quite quickly. I moved on to spend a chunk of my time on servers and gained experience with frameworks like WordPress and Drupal, then the likes of CodeIgniter and CakePHP, and became comfortable in Debian-based and RHEL/CentOS environments. I ventured into iOS development and published a couple of native apps to the App Store too! I have started to spend a good deal of my time writing Python and have invested a little time in Django. The problem is, I still spend a fair chunk of my time doing more front end web development (writing markup and CSS for site themes, design-led JavaScript, small applications for which application architecture and software engineering are relatively unimportant or too time-consuming to invest in) in my job. What I want to do is really exercise the systematic/logical portion of my brain and tackle challenging problems on a daily basis. I want to have to care about big-oh running times, modularity in software, DRY, performance tuning and development methodologies. I want to work for a firm whose clients say: "Yes, these things are important to us and we'll pay you to get them right." But it is difficult: I have no formal training and am potentially becoming a jack of all trades. Not that being a jack of many trades is necessarily a bad thing, but the scope of work I find myself involved in is far too broad. And, there are only so many hours in a day outside of work! My question is: where do I go from here? I am starting to work on a few open source projects and have started to publish content to my blog. But this isn't likely to make it past the recruitment consultants and HR departments of many a firm. And I do not, for example, work in a team that practices agile methodologies, so how do I get work in such a team to gain experience? While I have been responsible for implementing version control and some solid working practices into our current environment, there is only so far I can go in this context. What would convince you that I'm worth taking a risk on? What would convince you that I'll have caught up with the other guys in your employ in next to no time?

    Read the article

  • apt-get upgrade stuck at the same package (openjdk-6-jre-headless)

    - by decibyte
    I'm stuck and can't upgrade my system. Running sudo apt-get upgrade gives me the following: mmm@alalunga:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages have been kept back: ginn libgrip0 linux-generic-pae linux-headers-generic-pae linux-image-generic-pae The following packages will be upgraded: apport apport-gtk bind9-host build-essential dhcp3-client dhcp3-common dnsutils eog evince evince-common firefox firefox-branding firefox-dbg firefox-globalmenu firefox-gnome-support firefox-locale-en gimp gimp-data gir1.2-totem-1.0 glib-networking glib-networking-common glib-networking-services gnupg gpgv icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-6-plugin icedtea-netx icedtea-netx-common icedtea-plugin isc-dhcp-client isc-dhcp-common libapache2-mod-php5 libart-2.0-2 libbind9-80 libdns81 libevince3-3 libgimp2.0 libisc83 libisccc80 libisccfg82 liblwres80 libssl-dev libssl-doc libssl1.0.0 libtotem0 linux-firmware linux-libc-dev openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssl php-pear php5-cli php5-common php5-curl php5-dev php5-gd php5-mysql php5-xsl policykit-1-gnome python-apport python-django python-gst0.10 python-problem-report resolvconf thunderbird thunderbird-globalmenu thunderbird-gnome-support totem totem-common totem-mozilla totem-plugins xserver-xorg-input-synaptics 74 upgraded, 0 newly installed, 0 to remove and 5 not upgraded. Need to get 317 MB/327 MB of archives. After this operation, 1.481 kB of additional disk space will be used. Do you want to continue [Y/n]? Get:1 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:2 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:3 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:4 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:5 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:6 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] Get:7 http://archive.ubuntu.com/ubuntu/ precise-updates/main openjdk-6-jre-headless i386 6b24-1.11.4-1ubuntu0.12.04.1 [27,3 MB] 9% [7 openjdk-6-jre-headless 27,3 MB/27,3 MB 100%] It keeps downloading the openjdk-6-jre-headless package, then does nothing for a while (hanging at what is shown as the last line above), then downloads the package again. It's at its 13th download attempt at the moment of writing. The actual downloads seem to be done just fine, but whatever it does after downloading seems to be failing. I tried removing openjdk-6, but then it wanted to install openjdk-7 instead, with the same result, hanging at openjdk-7-jre-headless instead. I also tried changing servers from my local (Danish) one to the main server. No luck. It's also keeping me from upgrading all the other packages. What to do?

    Read the article

  • Client-Server connection response timeout issues

    - by Srikar
    User creates a folder in the client, and in the client-side code I hit an API on the server to make this persistent for that user. But in some cases my server is so busy that the request times out. The server has executed my request but timed out before sending a response back to the client. The timeout set in the client is 10 seconds. At this point the client thinks that the server has not executed its request (of creating a folder) and ends up sending it again. Now I have 2 folders on the server but the user has created only 1 folder in the client. How to prevent this? One of the ways to solve this is to use a unique ID with each new request. So the ID acts as a distinguisher between old and new requests from the client. But this leads to storing these IDs on my server and doing a lookup for each API call, which I want to avoid. The other way is to increase the timeout duration. But I don't want to change this from 10 seconds. Something tells me that there are better solutions. I have posted this question on Stack Overflow but I think it's better suited here. UPDATE: I will make my problem even more explicit. The client is a web browser and the server is running nginx+django+mysql (a standard stack). The user creates a folder in the web browser. As a result I need to hit a server API. The API call responds, and thereby the client knows the API call was a success. This is the normal scenario. Sometimes, though, the server successfully completes the API request but the client-side (web browser) connection times out before the server can respond. The client has no clue at this point. The user thinks the request failed and clicks again. This time it was a success, but when the UI refreshes he sees 2 folders. I want to remedy this situation.
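
    A sketch of the unique-ID idea with the storage and lookup cost reduced to one indexed column: the client sends a UUID per logical action, and a unique constraint turns a duplicate retry into a fetch of the existing row instead of a second folder. Model and field names here are hypothetical, assuming the question's Django stack:

        from django.db import models, IntegrityError

        class Folder(models.Model):
            name = models.CharField(max_length=255)
            # UUID minted by the client, once per logical "create folder" action
            request_id = models.CharField(max_length=36, unique=True)

        def create_folder(name, request_id):
            try:
                return Folder.objects.create(name=name, request_id=request_id)
            except IntegrityError:
                # a timed-out first attempt already created it; return that row
                return Folder.objects.get(request_id=request_id)

    The lookup piggybacks on the unique index MySQL maintains anyway, so there is no extra scan per API call.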

    Read the article

  • What do you expect from a package manager for Emacs

    - by tarsius
    Although several hundred Emacs Lisp libraries exist, GNU Emacs does not have an (internal) package manager. I guess that most users would agree that it is currently rather inconvenient to find, install and especially keep up-to-date Emacs Lisp libraries. These pages make life a bit easier: Emacs Lisp List - Problem: I see dead people (links). Emacswiki - Problem: May contain traces of nuts (malicious code). These are some package managers: XEmacs package manager package.el - ELPA pases install.el install-elisp.el plugin.el use-package.el jem-pkg.el epkg/elm - the one I am working on. And these are some packages that provide functionality that might be useful in a package manager: ell.el - Browse the Emacs Lisp List genauto.el - helps generate autoloads for your elisp packages date-calc.el - date calculation and parsing routines strptime.el - partial implementation of POSIX date and time parsing wikirel.el - Visit relevant pages on the Emacs Wiki loadhist.el, lib-requires.el, elisp-depend.el - Commands to list Emacs Lisp library dependencies. project-root.el - Define a project root and take actions based upon it So I would like to know from you what you consider important/unimportant/supplementary... in a package manager for Emacs. Some ideas: Many packages (incorporate the Emacs Lisp List and other lists of libraries). Only packages that have been tested. Support for more than one package archive (so people can choose between many/tested packages). Dependencies calculated based on required features only. Dependencies take particular versions into account. Only use versions that have been released upstream. Use versions from version control systems if available. Packages are categorized. Packages can be uninstalled and updated, not only installed. Support creating forks of upstream versions of packages. Support publishing these forks. Support choosing a fork. After installation packages are activated. Generate autoloads. Integration with Emacswiki (see wikirel.el). Users can tag, comment ... packages and share that information. Only FSF-assigned/GPL/FOSS software or don't care about license. Package manager should be integrated in Emacs. Support contacting the author. Lots of metadata. Suggest alternatives before installing a particular package. Some discussions about the subject at hand: emacs-devel 20080801 comp.emacs 20021121 RationalElispPackaging I am hoping for these kinds of answers: Pointers to more implementations, discussions etc. Lengthy descriptions of a set of features that make up your ideal package manager. Descriptions of one particular desired/undesired feature. This has the advantage that the regular voting mechanism allows us to see which features are most welcome. Feel free to elaborate on my ideas from above. Surprise me.

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even "generally accepted" standard for PL/SQL. That puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads, and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options:

    - Add a feature that lets the user customize the standard and then reformat the code according to that standard.
    - Add a feature that lets the user customize the standard and simply generate a violations list, like StyleCop does, leaving the SQL untouched.

    In my mind, the first option saves the end users a lot of work but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings while doing no work whatsoever. (It would just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you is kind of poll-ish, but kind of not: if you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm at a loss due to the lack of a cohesive standard, and given that there isn't anything officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help establish the popularity of a given "refactoring."

    P.S. The engine parses SQL into an expression tree so it can robustly analyze and reformat it. There should be quite a bit we can do to correct the format of the SQL, but I am thinking that for the first release layout is the primary concern. It is worth noting that the tool already has refactorings for converting keywords to upper case and identifiers to lower case.
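    Both options can share one rule abstraction: each configurable rule knows how to detect a violation on a parse-tree node, and optionally how to rewrite it. The project itself is .NET, so the Python below is purely an illustrative sketch of the shape, with invented node and rule names:

        from dataclasses import dataclass

        @dataclass
        class Node:
            kind: str  # e.g. "keyword", "identifier"
            text: str

        class FormattingRule:
            """One user-customizable standard, e.g. 'keywords in upper case'."""
            def check(self, node):
                """Return a list of violation messages for this node."""
                raise NotImplementedError
            def fix(self, node):
                """Return a corrected node, or the node unchanged."""
                return node

        class UpperCaseKeywords(FormattingRule):
            def check(self, node):
                if node.kind == "keyword" and node.text != node.text.upper():
                    return ["keyword '%s' should be upper case" % node.text]
                return []
            def fix(self, node):
                if node.kind == "keyword":
                    node.text = node.text.upper()
                return node

        rule = UpperCaseKeywords()
        print(rule.check(Node("keyword", "select")))      # one violation reported
        print(rule.fix(Node("keyword", "select")).text)   # 'SELECT'

    "Violations list" mode walks the tree calling only check(); "Global Reformat" mode calls fix(). The user-customized standard is then just the set of enabled rules and their settings, so one implementation serves both options.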

    Read the article

  • Alternatives to requiring users to register for an account?

    - by jamieb
    I'm working on a side project to build a new web app idea of mine. For the sake of discussion, let's say this app displays a random photograph of a famous work of art. On a scale of 1 to 5, users are asked to rate how well they like each piece of art, and then are shown the next photo. Eventually, the app is able to get a sense of the person's style and can recommend artwork that he/she may find pleasing. The whole concept is similar to Netflix. I understand how to do all the preference-matching logic (although not as sophisticated as Netflix's). But I'd like to find a way to do this without requiring users to create an account first. This is a novelty website that a typical user might use only a handful of times; requiring registration is overkill and would likely drastically reduce its utility. I'd like to allow people to begin rating artwork within five seconds of their initial pageview, yet maintain the integrity of the voting (since recommendations are predicated on how other people have rated the various pieces of artwork). Can it be done? Some ideas:

    - OpenID. The perfect solution, except that it's not widely used and my target audience isn't the most technically adept demographic.
    - Text message. The user inputs a phone number and is texted a four-digit code to key into the web app. Quick, easy, and a great way to limit abuse. However, privacy concerns abound; people are probably even less likely to give me their phone number than their email address.
    - Facebook login. I personally don't have a Facebook account due to privacy concerns, and I'd really hate to support such a proprietary platform.
    - Hash code/bookmark (see the sketch below). The visitor's initial pageview generates a 5- or 6-digit alphanumeric code that is embedded in each subsequent URL; they can bookmark any page to save their state. Good: a very simple system that doesn't require any user action. Bad: very easy to stuff the ballot box, and it might be difficult to account for users sharing the link containing their ID code via email or social networking sites.
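    For the hash-code idea, the token only has to be unguessable enough to stop casual ballot stuffing. A minimal sketch, assuming a 6-character lowercase-plus-digits alphabet (both arbitrary choices, not recommendations from the post):

        import random
        import string

        ALPHABET = string.ascii_lowercase + string.digits

        def new_visitor_id(length=6):
            """Random short ID for URLs like /rate/k3x9qa/42 (path is made up)."""
            rng = random.SystemRandom()  # OS entropy, not the seeded default PRNG
            return "".join(rng.choice(ALPHABET) for _ in range(length))

    Storing the same ID in a cookie as well would let you tell a shared-link visit apart from the original visitor, which softens both the ballot-stuffing and the link-sharing problems noted above.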

    Read the article

  • Detecting abuse for post rating system

    - by Steven smethurst
    I am using a WordPress plugin called "GD Star Rating" to allow my users to vote on stories that I post to one of my websites: http://everydayfiction.com/ Recently we have been having a lot of abuse of the system: stories that have obviously been voted up artificially. "GD Star Rating" creates detailed logs when a user votes on a story, including IP, time of vote, user agent, etc. For example, this story has 181 votes with an average of 5.7: http://www.everydayfiction.com/snowman-by-shaun-simon/ Most other stories only get around ~40 votes each day. At first I thought that the story got onto a social bookmarking site (Digg, StumbleUpon, etc.), but after checking the logs I found that it is getting the same amount of traffic a normal story gets (~2k-3k). I then checked whether all the votes for this particular story were coming from the same IP address. I could see this happening if a user at a school's computer lab used all the lab computers to vote up the story. Not one duplicate IP address in the log for this story:

        SELECT ip, COUNT(*) AS count
        FROM wp_gdsr_votes_log
        WHERE id = 3932
        GROUP BY ip
        ORDER BY count DESC

    Next I thought that a user might be using a proxy to vote up the story. I checked this by grouping the browser user_agent values together to see whether a single browser was voting in a suspicious pattern. At most 7 users were using a similar browser, but they voted sporadically (1-5); no evidence of wrongdoing:

        SELECT user_agent, COUNT(*) AS count
        FROM wp_gdsr_votes_log
        WHERE id = 3932
        GROUP BY user_agent
        ORDER BY count DESC

    My last check was to see whether all the votes came in at once. Maybe someone has a really interesting bot that can change the user agent and use proxies. At most 5 votes came within 2 minutes of each other, and there doesn't seem to be any regularity in how people vote (i.e., a 5 vote does not come in once a minute):

        SELECT *
        FROM wp_gdsr_votes_log
        WHERE id = 3932 AND vote = 5
        ORDER BY wp_gdsr_votes_log.voted DESC

    The obvious solution to this problem is to force people to log in before they are allowed to vote, but I would prefer not to go down that route unless it is absolutely necessary. I'm looking for suggestions on things to test for to detect the abuse.
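    One further signal worth testing, building on the timing check above: bursts of votes too dense for organic traffic. A rough sketch; the column name follows the post's wp_gdsr_votes_log schema, but the thresholds are guesses to tune against known-clean stories:

        from datetime import datetime, timedelta

        def suspicious_timing(timestamps, burst_window=120, burst_votes=10):
            """Flag a story if burst_votes votes land within burst_window seconds.

            timestamps: datetime objects for one story's votes, e.g. the
            'voted' column pulled from wp_gdsr_votes_log.
            """
            timestamps = sorted(timestamps)
            for i in range(len(timestamps) - burst_votes + 1):
                span = timestamps[i + burst_votes - 1] - timestamps[i]
                if span.total_seconds() <= burst_window:
                    return True
            return False

        # Synthetic example: 15 votes arriving 10 seconds apart.
        votes = [datetime(2010, 5, 1, 12, 0) + timedelta(seconds=10 * i)
                 for i in range(15)]
        print(suspicious_timing(votes))  # True: 10 votes span only 90 seconds

    The same loop run per IP subnet or per user_agent group would catch a bot that rotates identities but cannot hide its cadence.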

    Read the article

< Previous Page | 143 144 145 146 147 148 149 150 151 152 153  | Next Page >