Search Results

Search found 14131 results on 566 pages for 'note'.

Page 184/566 | < Previous Page | 180 181 182 183 184 185 186 187 188 189 190 191  | Next Page >

  • Algorithm for a lucky game [on hold]

    - by Ronnie
    Assume we have the following Keno (lottery-type) game: from 80 numbers (1 to 80), 20 are drawn. The players choose 1, 2, 3, ... or 12 numbers to play (12 categories). If they choose, for example, 4 numbers, they win if they correctly predict a certain amount of those numbers (2, 3 or 4 of the 4 they have played) and lose if they predict only 1 or 0 numbers. They win X times their money according to some predefined factor depending on how many numbers they predict in each category. The same goes for the other categories; e.g. 11 out of 11 pays 250,000 times your money and 12 out of 12 pays 1,000,000 times your money, so the company would want to avoid winnings that high. A draw is made by the company every 5 minutes, and in each draw around 120,000 (let's say) different predictions (Keno tickets) are played. Let's assume 12,000 are played in category 10, 12,000 in category 11 and 12,000 in category 12.

    I'm wondering if there is an algorithm that would allow the company that provides the game, in the 5 minutes between drawings, to find a 20-number set that avoids any "12 out of 12", "11 out of 12", "11 out of 11", "10 out of 11" or "10 out of 10" winning ticket. That is, is there an algorithm that can, in roughly less than 1 minute on today's hardware, find a 20-number set such that none of the 12,000 sets of 12, 11 and 10 numbers that the players played (in categories 10, 11 and 12) scores "12 out of 12", "11 out of 12", "11 out of 11", "10 out of 11" or "10 out of 10"?

    Or, even better, the generalization of the problem: what is the best algorithm (from the perspective of minimal time) to find a Y-number set from the numbers 1 to Z (e.g. Y=20, Z=80) so that none of the X sets of K numbers being played (in category K) contains more than K-m numbers from the Y-set? (Note that for Y=K and m=1 there is a practical algorithm.)
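    As a rough illustration of one way to attack the concrete case (this sketch is mine, not from the question, and is not a guaranteed or optimal method for the general K-m problem): a random 20-number draw is astronomically unlikely to cover 10 or more numbers of any single ticket, so simple rejection sampling over candidate draws is usually enough within the time budget. The data layout below is hypothetical.

        import random

        # Reject any draw that would produce the big winners listed above:
        # 12/12 or 11/12, 11/11 or 10/11, and 10/10.
        REJECT_AT = {12: 11, 11: 10, 10: 10}   # category K -> smallest match count to avoid

        def draw_is_safe(draw, tickets_by_category):
            # draw: a set of 20 numbers; tickets_by_category: {K: [set of K numbers, ...]}
            for k, tickets in tickets_by_category.items():
                limit = REJECT_AT[k]
                for ticket in tickets:
                    if len(draw & ticket) >= limit:
                        return False
            return True

        def find_safe_draw(tickets_by_category, attempts=10000):
            pool = list(range(1, 81))
            for _ in range(attempts):
                draw = set(random.sample(pool, 20))
                if draw_is_safe(draw, tickets_by_category):
                    return draw
            return None   # no safe 20-number set found within the attempt budget

    With ~36,000 tickets, each safety check is a few tens of thousands of small set intersections, so thousands of candidate draws fit comfortably inside a minute. The general "no more than K-m matches" version is a covering-style combinatorial problem and is not addressed by this sketch.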

    Read the article

  • Innodb : cannot allocate the memory for the buffer pool

    - by mingyeow
    My innodb keeps crashing. This is the error message below. Does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available

    Read the article

  • Multithreading in Windows Phone 7 emulator: A bug

    - by Laurent Bugnion
    Multithreading is supported in Windows Phone 7 Silverlight applications; however, the emulator has a bug (which I discovered and which was confirmed to me by the dev lead of the emulator team): if you attempt to start a background thread in the MainPage constructor, the thread never starts. The reason is a problem with the emulator UI thread, which doesn't leave any time for the background thread to start. Thankfully there is a workaround (see code below). Also, the bug should be corrected in a future release, so it's not a big deal, even though it is really confusing when you try to understand why the *%&^$£% thread is not &$%&%$£ starting (that was me in the plane the other day ;)

    This code does not work:

        public partial class MainPage : PhoneApplicationPage
        {
            public MainPage()
            {
                InitializeComponent();
                SupportedOrientations = SupportedPageOrientation.Portrait
                    | SupportedPageOrientation.Landscape;

                var counter = 0;

                ThreadPool.QueueUserWorkItem(o =>
                {
                    while (true)
                    {
                        Dispatcher.BeginInvoke(() =>
                        {
                            textBlockListTitle.Text = (counter++).ToString();
                        });
                    }
                });
            }
        }

    This code does work:

        public MainPage()
        {
            InitializeComponent();
            SupportedOrientations = SupportedPageOrientation.Portrait
                | SupportedPageOrientation.Landscape;

            var counter = 0;

            ThreadPool.QueueUserWorkItem(o =>
            {
                while (true)
                {
                    Dispatcher.BeginInvoke(() =>
                    {
                        textBlockListTitle.Text = (counter++).ToString();
                    });

                    // NOTICE THIS LINE!!!
                    Thread.Sleep(0);
                }
            });
        }

    Note that even if the thread is started in a later event (for example the Click of a Button), the behavior without the Thread.Sleep(0) is not good in the emulator. As of now, I would recommend always sleeping when starting a new thread. Happy coding, Laurent

    Read the article

  • How can I use the shell to make my mp3s a Shoutcast source?

    - by ChasonDehsotel
    I'm looking to stream a directory of mp3s from my audio source (a Debian server) to my Shoutcast server. The idea is to have an archive playing whenever someone isn't broadcasting live. I'm not sure how to continue. I started with extensive Googling and was unable to come up with a solution. Evan Carroll suggested I try here. I appreciate any insight y'all may have. On a side note, "users with less than 100 reputation can't create new tags. The tags 'shoutcast-source shoutcast broadcasting' are new. Try using existing tags instead." -- Who can create these?

    Read the article

  • Is it a "pattern smell" to put getters like "FullName" or "FormattedPhoneNumber" in your model?

    - by DanM
    I'm working on an ASP.NET MVC app, and I've been getting into the habit of putting what seem like helpful and convenient getters into my model/entity classes. For example:

        public class Member
        {
            public int Id { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string PhoneNumber { get; set; }

            public string FullName
            {
                get { return FirstName + " " + LastName; }
            }

            public string FormattedPhoneNumber
            {
                get
                {
                    return "(" + PhoneNumber.Substring(0, 3) + ") "
                        + PhoneNumber.Substring(3, 3) + "-"
                        + PhoneNumber.Substring(6);
                }
            }
        }

    I'm wondering what people think about the FullName and FormattedPhoneNumber getters. They make it very easy to create standardized data formats throughout the app, and they seem to save a lot of repeated code, but it could definitely be argued that data format is something that should be handled in mapping from model to view-model. In fact, I was originally applying these data formats in my service layer where I do my mapping, but it was becoming a burden to constantly have to write formatters and then apply them in many different places. E.g., I use "Full Name" in most views, and having to type something like model.FullName = MappingUtilities.GetFullName(entity.FirstName, entity.LastName); all over the place seemed a lot less elegant than just typing model.FullName = entity.FullName (or, if you use something like AutoMapper, potentially not typing anything at all). So, where do you draw the line when it comes to data formatting? Is it "okay" to do data formatting in your model or is that a "pattern smell"? Note: I definitely do not have any HTML in my model. I use HTML helpers for that. I'm strictly talking about formatting or combining data (and especially data that is frequently used).

    Read the article

  • Embedding BIP just got easier!

    - by Tim Dexter
    First, to make up for yesterday's documentation faux pas: I referenced Kan's blog as a source for the debug feature over Leslie's fine official documentation, which had come out some time before Kan's blog. Apologies again, Leslie! I also noticed another new feature that I was unaware of in the New Features guide for 10.1.3.4.1: the ability to remove the BI Publisher header from the user interface (the original post shows before-and-after screenshots). Useful? If you want to host BIP inside another application such as a portal or custom app, you can make the BIP UI look much more integrated. By default you still get our 'blue' look and feel, but I have documented elsewhere how you can change that. For instance, you can now have BIP hosted inside a BIEE instance and provide access to all the reports a user might wish to run with very little effort, rather than picking and choosing what to bubble up to the dashboard. How do you do it? Get on over to the New Features guide and find out; there are some other goodies there too. Note to self: RTFM!

    Read the article

  • Project Euler 14: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 14. As always, any feedback is welcome.

        # Euler 14
        # http://projecteuler.net/index.php?section=problems&id=14
        # The following iterative sequence is defined for the set
        # of positive integers:
        #   n -> n/2 (n is even)
        #   n -> 3n + 1 (n is odd)
        # Using the rule above and starting with 13, we generate
        # the following sequence:
        #   13 40 20 10 5 16 8 4 2 1
        # It can be seen that this sequence (starting at 13 and
        # finishing at 1) contains 10 terms. Although it has not
        # been proved yet (Collatz Problem), it is thought that all
        # starting numbers finish at 1. Which starting number,
        # under one million, produces the longest chain?
        # NOTE: Once the chain starts the terms are allowed to go
        # above one million.

        import time
        start = time.time()

        def collatz_length(n):
            # 0 and 1 return self as length
            if n <= 1:
                return n
            length = 1
            while (n != 1):
                if (n % 2 == 0):
                    n /= 2
                else:
                    n = 3*n + 1
                length += 1
            return length

        starting_number, longest_chain = 1, 0
        for x in xrange(1, 1000001):
            l = collatz_length(x)
            if l > longest_chain:
                starting_number, longest_chain = x, l

        print starting_number
        print longest_chain

        # Slow 31 seconds
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a = raw_input('Press return to continue')
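    One piece of feedback, offered as a hedged sketch rather than the author's code: the chains overlap heavily, so caching previously computed lengths removes most of the 31 seconds. The function below keeps the post's Python 2 style (xrange, print statement); the cache name is my own.

        collatz_cache = {1: 1}

        def collatz_length_cached(n):
            # walk down until we reach a value whose length is already known,
            # then fill in the cache on the way back up
            path = []
            while n not in collatz_cache:
                path.append(n)
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            length = collatz_cache[n]
            for m in reversed(path):
                length += 1
                collatz_cache[m] = length
            return length

        best = max(xrange(1, 1000001), key=collatz_length_cached)
        print best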

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism to not be used. By way of example, first let's create two tables with a simple parent and child (one-to-one) relationship, and then populate them with 100,000 rows.

        Drop table Parent
        Drop table Child
        go
        create table Parent(id integer identity Primary Key, data1 char(255))
        Create Table Child(id integer Primary Key)
        go
        insert into Parent(data1)
        Select top 1000000 NULL from sys.columns a cross join sys.columns b

        insert into Child
        Select id from Parent
        go

    If we then execute

        update Parent
        set data1 = ''
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1 and Child.id % 100 = 1

    we should see an execution plan using parallelism (screenshot in the original post). However, if the OUTPUT clause is now used

        update Parent
        set data1 = ''
        output inserted.id
        from Parent
        join Child on Parent.Id = Child.Id
        where Parent.Id % 100 = 1 and Child.id % 100 = 1

    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome. Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally, if you use a table variable then there is still no parallelism.

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications. TypeScript Overview Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download. 
    TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others, and can even catch type issues and provide enhanced code help. Special “declaration” files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files certainly aren’t required, TypeScript’s support for declaration files makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others, as well as some of the more popular JavaScript libraries and frameworks.

    Getting Started with TypeScript

    To get started learning TypeScript visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled (the original post includes a screenshot of the playground in action). One of the first things that may stand out to you about the playground code is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container, which helps tremendously with re-use and maintainability, especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy, especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next:

        class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }

    Looking through the code you’ll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront, before even running the code. Many types are supported, including primitive types like string, number, bool, undefined, and null, as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next:

        var Greeter = (function () {
            function Greeter(message) {
                this.greeting = message;
            }
            Greeter.prototype.greet = function () {
                return "Hello, " + this.greeting;
            };
            return Greeter;
        })();

    Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is an important feature that helps avoid naming collisions between variables and functions.
    In cases where you’d like to wrap a class in a naming container (similar to a namespace in C# or a package in Java) you can use TypeScript’s module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.

        module AcmeCorp {
            export class Greeter {
                greeting: string;
                constructor (message: string) {
                    this.greeting = message;
                }
                greet() {
                    return "Hello, " + this.greeting;
                }
            }
        }

        var greeter = new AcmeCorp.Greeter("world");

    In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s extends keyword. The following code shows an example of using inheritance to define two report objects:

        class Report {
            name: string;
            constructor (name: string) {
                this.name = name;
            }
            print() {
                alert("Report: " + this.name);
            }
        }

        class FinanceReport extends Report {
            constructor (name: string) {
                super(name);
            }
            print() {
                alert("Finance Report: " + this.name);
            }
            getLineItems() {
                alert("5 line items");
            }
        }

        var report = new FinanceReport("Month's Sales");
        report.print();
        report.getLineItems();

    In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call. TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.

        interface Thing {
            intersect: (ray: Ray) => Intersection;
            normal: (pos: Vector) => Vector;
            surface: Surface;
        }

        class Plane implements Thing {
            normal: (pos: Vector) => Vector;
            intersect: (ray: Ray) => Intersection;

            constructor (norm: Vector, offset: number, public surface: Surface) {
                this.normal = function (pos: Vector) { return norm; }
                this.intersect = function (ray: Ray): Intersection {
                    var denom = Vector.dot(norm, ray.dir);
                    if (denom > 0) {
                        return null;
                    } else {
                        var dist = (Vector.dot(norm, ray.start) + offset) / (-denom);
                        return { thing: this, ray: ray, dist: dist };
                    }
                }
            }
        }

    At first glance it doesn’t appear that the surface member is implemented in Plane, but it’s actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with normal and intersect. TypeScript has additional language features, but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That’s not a question that I or anyone else can answer for you. You’ll need to give it a spin to see what you think.
    In future posts I’ll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I’m in the process of working with John Papa on a new TypeScript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • I want to turn VB.Net Option Strict On

    - by asjohnson
    I recently found out about strong typing in VB.Net (naturally it was on here, thanks!) and have decided I should take another step toward being a better programmer. I went from VBA macros to VB.Net because I needed a program that I could automate, and I never read anything about strong typing, so I kind of fell into the VB.Net default trap. Now I am looking to turn Option Strict on and sort out this whole type thing. I was hoping someone could direct me towards some resources to make this transition as painless as possible. I have read around some, and CType seems to come up a lot, but past that I am at a bit of a loss. What are the benefits of switching? Is there more to it than just using CType to cast things? I feel like there is a good article that I have failed to come across, and any direction would be great. Would a good approach be to rewrite a program that is written with Option Strict off and note the differences?

    Read the article

  • OpenVPN Push DNS Not Working Correctly On Windows

    - by woodsbw
    I currently have an OpenVPN server set up on an Ubuntu machine, as well as DNSMasq. I want to push DNS to the client (road-warrior setup). I had push "dhcp-option DNS x.x.x.x" where x.x.x.x was an open OpenDNS server, for testing, and everything was working when I connected from my Windows client. Now that I have DNSMasq set up, I changed the "dhcp-option DNS x.x.x.x" to the DNSMasq server, but when the client connects it still receives the old OpenDNS server IP. I'm at a bit of a loss here; I have tried flushing DNS on the client, rebooting the server, and I even grep'd the entire server to see if the OpenDNS IP was in some other config I was missing... it wasn't. One other note: when I connect to the VPN and explicitly run nslookup against the DNSMasq IP, the addresses resolve correctly, so it isn't a DNSMasq issue.

    Read the article

  • Oracle ADF at Oracle OpenWorld 2012

    - by Shay Shmeltzer
    This year is going to be very busy for Oracle ADF developers who'll attend Oracle Open World. Check out the list of Oracle ADF related sessions, labs, demos and other Oracle ADF activities.  This list will help you not to miss any ADF related activity. We have over 50 ADF related sessions, multiple labs including new ones on ADF Mobile, Application Life Cycle Management and ADF in Eclipse, we'll have several demo booths where you can meet product managers, and we'll be featured in several keynotes as well. While we have several "beginners" sessions, you'll find that we have a lot of in-depth technical sessions and sessions that cover best-practices too. Of course, it is not just us product managers presenting about Oracle ADF, there are a lot of Oracle ADF sessions presented by customers, Oracle ACEs, and other developers. So you can learn from the experience of real life implementations. Note that the ADF content starts early on Sunday with a full set of Oracle ADF sessions arranged for you by the Oracle ADF Enterprise Methodology Group - so plan your trip accordingly and be there early Sunday morning. First thing on Monday morning, don't miss the keynote for Oracle ADF developers at 10:45 at the Marriott Marquis - Salon 8 - "The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud". We are also arranging a meet-up of developers using Oracle ADF at the OTN Lounge on Wed at 4:30pm - and we would love to meet you there - this will also give you an opportunity to meet other Oracle ADF users and members of the community. And after that we can all head over to the big Wed party to see Pearl Jam and Kings of Leon. One recommendation for those who are already registered - start planning your schedule and booking your place in the sessions now through the schedule builder. This will guarantee that you won't be left out of sessions you want to attend due room size limitations. Oracle OpenWorld 2013 will be a must attend event for serious Oracle ADF developers - don't miss it.

    Read the article

  • SQLAuthority News – Download SQL Server 2012 SP1 CTP4

    - by pinaldave
    There are a few trends I often see in the industry, for example: i) running servers on the n-1 version, and ii) waiting till SP1 is released to adopt the product. Microsoft has recently released SQL Server 2012 SP1 CTP4. CTP stands for Community Technology Preview and it is not the final version yet. The SQL Server 2012 SP1 CTP release is available for testing purposes and use on non-production environments. What's new for SQL Server 2012 SP1:
        • AlwaysOn Availability Group OS Upgrade
        • Selective XML Index
        • FIX: DBCC SHOW_STATISTICS works with SELECT permission
        • New dynamic function returns statistics properties
        • SSMS Complete in Express
        • SlipStream Full installation
        • Business Intelligence Excel Update
    You can download SQL Server 2012 SP1 CTP4 from here. The SQL Server 2012 SP1 CTP4 feature pack is available for download from here. Additionally, SQL Server 2012 SP1 CTP Express is available to download as well from here. Note that SQL Server 2012 SP1 CTP has SSMS as well. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is often preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets trapped with the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should one be trapped with the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now that makes my website Postgres-specific, even when I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL query. Even Django's ORM could be built over it, so that if you step outside the ORM but avoid database-specific features, you can still remain database independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its full power, to which I agree. But my point was not that we should always be database independent; my point is that we should be database independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?
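    As a concrete illustration of the trade-off being described (my own sketch with hypothetical models, not code from the question): a multi-join query can often stay inside the ORM, and the raw alternative is exactly the point where the code becomes dialect- and schema-specific.

        # models.py -- hypothetical models, for illustration only
        from django.db import models

        class Author(models.Model):
            name = models.CharField(max_length=100)

        class Book(models.Model):
            title = models.CharField(max_length=200)
            author = models.ForeignKey(Author, on_delete=models.CASCADE)

        # Database independent: the backend generates the JOIN for us
        books = (Book.objects
                 .select_related('author')
                 .filter(author__name__startswith='A'))

        # Database specific: raw SQL is tied to one dialect and to table naming
        books_raw = Book.objects.raw(
            "SELECT b.* FROM app_book b "
            "JOIN app_author a ON a.id = b.author_id "
            "WHERE a.name LIKE %s", ['A%'])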

    Read the article

  • Solaris: What comes next?

    - by alanc
    As you probably know by now, a few months ago, we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple weeks ago with a number of the Silicon Valley based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned what follows are rough ideas, and as I'll discuss later, none of them have any committment, schedule, working code, or even plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups, and competitive OS'es. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas. Making Network Admins into Socially Networking Admins We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there's enough of those already, and Oracle even has its own Oracle Mix social network already. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certfications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would provide recruiters with a more detailed level of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. 
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, and not worrying about the effect on your business of a mission critical server going down. More Z's for ZFS Our suggested new feature for ZFS was inspired by the worlds most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary use space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed. Universal Single Sign On One thing all the engineers agreed on was that we still had far too many "Single" sign ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to login using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would login to your Solaris account upon reciept of a cookie from their identity service. Pinning notes to the cloud We also all noted that we all have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary, some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? 
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only to users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated completely random, impossible to remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells — redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation. Location service for Solaris servers A longer term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in todays massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregrator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million dollars of SPARC servers in this building, waiting for you to steal them.” so it's probably best it was rejected.] Harnessing the power of the GPU for Security Most modern OS'es make use of the widespread availability of high powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limations such as missing eyes or fingers, not to mention the lost productivity when employees can't login due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have - pam_phrenology.so, a new PAM module that uses an array USB attached web cams (or just one if the user is willing to spin their chair during login) to take pictures of the users head from all angles, create a 3D model and compare it to the one in the authentication database. 
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome. Currently, a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to avoid bad hair days becoming bad login days. Reaction to these ideas After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation when we put our pint glasses on the napkins we were taking notes on, and to that end let us know they would not be approving any more engineering offsites in Irish themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ending on the Friday of the weekend Oktoberfest starts.) They recommended our research techniques could be improved over just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, such as considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and we might have to try harder to get those to be accepted now that we are one unified company. So as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.

    Read the article

  • E-Business Tax Release 12 Setup - US Location Based Taxes Part 1, Prerequisites & Regimes

    - by Robert Story
    Upcoming Webcast
    Title: E-Business Tax Release 12 Setup - US Location Based Taxes Part 1, Prerequisites & Regimes
    Date: April 28, 2010
    Time: 12:00 pm EDT
    Product Family: Receivables Community

    Summary: This one-hour session is part one of two on setting up a fresh implementation of US Location Based Taxes in Oracle E-Business Tax. It is recommended for functional users who wish to understand the steps involved in setting up E-Business Tax in Release 12. Topics will include:
        • Overview of E-Business Tax
        • Location setup
        • Regime to Rate Flow
        • Tax Regimes
        • Taxes
        • Tax Statuses
        • Tax Jurisdictions
        • Tax Recovery Rates
        • Tax Rates
        • Subscribing the Operation Unit to a Regime to Rate Flow
        • Brief Demonstration
    A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • ps ux on OSX shows user for ps command to be root? Is this normal?

    - by snies
    I am running OS X 10.6.1. When I am logged in as a normal user of group staff and do a ps ux, it lists my ps ux command as being run by root:

        snies  181  0.0  0.3  2774328  12500  ??    S   6:00PM  0:20.96  /System/Library...
        root  1673  0.0  0.0  2434788    508  s001  R+  8:16AM  0:00.00  ps ux
        snies  177  0.0  0.0  2457208    984  ??    Ss  6:00PM  0:00.52  /sbin/launchd
        snies 1638  0.0  0.0  2435468   1064  s001  S   8:13AM  0:00.03  -bash

    Is this normal behaviour? And if so, why? Please note that the user is not an Administrator account and is not able to sudo.

    Read the article

  • Gratuitous CRLF in Subject: line - why is it there, and is it legal?

    - by MadHatter
    I'm running into a problem with a NAGIOS system sending emails to a popular email-to-SMS service. The email-to-SMS service takes emails with text in the Subject: line, and sends them on to the mobile number encoded in the To: field. So far so good. Sadly, sendmail (and postfix before it) seem to be inserting a gratuitous CRLF into the (necessarily long) Subject: line, and that's causing my SMS messages to be truncated at the CRLF if and only if the Subject: line contains one or more colons past the gratuitous CRLF. I am confident that the messages are being created correctly, but just to be sure, here's me creating a completely noddy test message to myself, with a long Subject: line:

        echo "foo" | mail -s "1234567 101234567 201234567 301234567 401234567 501234567 601234567 701234567 801234567 90123456789" [email protected]

    Note there's no extra colon in this Subject: line; all I'm doing here is showing that an extra CRLF is inserted on the wire. Here's the result of sudo ngrep -x port 25:

        44 61 74 65 3a 20 46 72    69 2c 20 33 31 20 4d 61    Date: Fri, 31 Ma
        79 20 32 30 31 33 20 31    30 3a 34 33 3a 35 35 20    y 2013 10:43:55
        2b 30 31 30 30 0d 0a 54    6f 3a 20 72 65 61 70 65    +0100..To: reape
        72 40 74 65 61 70 61 72    74 79 2e 6e 65 74 0d 0a    [email protected]..
        53 75 62 6a 65 63 74 3a    20 31 32 33 34 35 36 37    Subject: 1234567
        20 31 30 31 32 33 34 35    36 37 20 32 30 31 32 33     101234567 20123
        34 35 36 37 20 33 30 31    32 33 34 35 36 37 20 34    4567 301234567 4
        30 31 32 33 34 35 36 37    20 35 30 31 32 33 34 35    01234567 5012345
        36 37 0d 0a 20 36 30 31    32 33 34 35 36 37 20 37    67.. 601234567 7
        30 31 32 33 34 35 36 37    20 38 30 31 32 33 34 35    01234567 8012345
        36 37 20 39 30 31 32 33    34 35 36 37 38 39 0d 0a    67 90123456789..
        55 73 65 72 2d 41 67 65    6e 74 3a 20 48 65 69 72    User-Agent: Heir
        6c 6f 6f 6d 20 6d 61 69    6c 78 20 31 32 2e 34 20    loom mailx 12.4
        37 2f 32 39 2f 30 38 0d    0a 4d 49 4d 45 2d 56 65    7/29/08..MIME-Ve
        72 73 69 6f 6e 3a 20 31    2e 30 0d 0a 43 6f 6e 74    rsion: 1.0..Cont
        65 6e 74 2d 54 79 70 65    3a 20 74 65 78 74 2f 70    ent-Type: text/p
        6c 61 69 6e 3b 20 63 68    61 72 73 65 74 3d 75 73    lain; charset=us

    About half way down (marked in bold+italic in the original post), between the 501234567 and the 601234567 in the original Subject: header, you can see a CRLF being inserted (0x0d 0x0a in the hex dump on the left-hand side, .. in the plain text on the right-hand side). The receiving MTA seems happy to post-process this, and when I look at the on-disc stored mail at the receiving end, I see only a LF (0x0a) in the Subject: line, and the line is parsed correctly and in its entirety by, eg, alpine. Nevertheless, the CRLF is there on the wire, and between me and the (excellent) email-to-SMS support people, we've established that these are the cause of the problem. So my question is: is it lawful for an MTA to insert a gratuitous CRLF on the wire? If it is, and I can prove it, then it's the email-to-SMS house's problem, because they are being intolerant. If it isn't, or it is but I can't prove it, then it becomes my problem, so an answer with references would be most useful. Edit: I can now come clean that the email-to-SMS service in question is kapow. Once this problem was explained to them, they got it, worked with me to develop and test a fix, and have deployed the fix. My long subject lines with colons in now get relayed correctly into SMSes.
I don't normally trumpet individual companies, especially not on SF, but I thought it worthy of note that kapow Did The Right Thing. (Disclaimer: I have no connection with kapow except as a paying customer who's happy about the way they dealt with his problem.)
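    For what it's worth, this illustration is mine and not part of the original question: RFC 5322 (section 2.2.3) allows a long header to be folded onto several lines using CRLF followed by whitespace, and standards-conforming generators do exactly what the ngrep capture shows. The same folding can be reproduced with Python's standard email library, using a placeholder address:

        from email.message import EmailMessage
        from email import policy

        msg = EmailMessage()
        msg['To'] = 'user@example.com'   # placeholder address
        msg['Subject'] = ('1234567 101234567 201234567 301234567 401234567 '
                          '501234567 601234567 701234567 801234567 90123456789')
        msg.set_content('foo')

        # policy.SMTP folds headers longer than ~78 characters using CRLF + whitespace,
        # so the long Subject: goes onto the wire with an embedded CRLF, as in the capture
        print(msg.as_bytes(policy=policy.SMTP).decode('ascii'))

    A folded header is still one logical Subject: line; the receiver is expected to unfold it, which is why the receiving MTA and alpine show it intact.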

    Read the article

  • Geometry Shader: distortions

    - by Christophe Lionet
    This is a cross-post from Stack Overflow; I thought it would be more appropriate here. There is a lot of code I could be posting, so to avoid overloading the page with code, I will post any part of it if requested. I am working from the ParticleGS DirectX10 sample to build a geometry shader based particle system in DirectX 11. Using the sample code, and changing it to my liking, I am able to draw a single quad (which is essentially one particle constantly recreating itself). However, I noticed a problem which was similar to one I once had: the rendered shape is distorted. Here is a video showcasing what is happening: http://youtu.be/6NY_hxjMfwY Now, I used to have this issue when using several effects together, when I realised that I needed to explicitly set the geometry shader to null for the other effects. I solved that problem, as you can see in the video, as the rest of the scene is drawing properly. Note that some sides are being culled somehow, although I turned off culling in my main render state. The texturing is fine too; the texture draws with appropriate proportions relative to the quad. I really don't see what I could be doing wrong here... what would cause the geometry shader to behave in such a way? Again, I will post any piece of code you request.

    Read the article

  • SQL Contest – Result of Cartoon Contest

    - by pinaldave
    Earlier we had an excellent contest run with the help of Embarcadero Technologies. We had two different contests on the same day, sponsored by the kind folks at Embarcadero. Here are the details of the winners. 1) Win USD 25 Amazon Gift Cards (10 Units): We had announced that we would award USD 25 Amazon Gift Cards to 10 lucky winners who downloaded DB Optimizer between Nov 29 and Dec 8. Winners will get their USD 25 Amazon Gift Cards within 5 days of this blog post, at their registered email address. If you do not receive the card, do send me an email (Pinal at sqlauthority.com) and I will follow up on the details. Names of the winners:
        • Ramdas Narayanan
        • Krishna Uppuluri
        • Donna Kray
        • Santosh Gupta
        • Robert Small
        • Samit Bhatt
        • Bernd Baumanns
        • Rodrigo Oriola
        • Jim Woodin
        • Alfred Sandou
    2) Win Star Wars R2-D2 Inflatable R/C: We had a cartoon contest. If you have not read the cartoon, I suggest you go over the cartoon story one more time. The task was to give the correct answer with some interesting note along with it. We selected a few good quotes and put them together, and later picked the winner by using a random algorithm. The winner gets a fantastic Star Wars R2-D2 Inflatable R/C. Name of the winner: Aadhar Joshi. He wins the R2-D2. You can read his comment over here. Thank you all for participating in the contest - this was fun - if you liked it do let me know and we will come up with something new for you next time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Camera won't stay behind model after pitch, then rotation

    - by ChocoMan
    I have a camera positioned behind a model. Currently, if I push the left thumbstick, making my model move forward, backward, or strafe, the camera stays with the model. If I push the right thumbstick left or right, the model rotates in those directions fine, along with the camera rotating while maintaining its position relatively behind the model. But when I pitch the model up or down and then rotate the model afterwards, the camera slowly rotates in a clock-like fashion behind the model. If I do a few rotations of the model and try to pitch the camera, the camera will eventually be looking at the side, then eventually the front of the model, while also rotating in a clock-like fashion. My question is: how do I keep the camera pitching up and down behind the model no matter how much the model has rotated? Here is what I got:

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            // Rotates Camera with model
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(angularSpeed);

            // Pitches Camera around model
            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(angularSpeed);

            AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        }

        // Orbit (yaw) Camera around with model (only seeing back of model)
        public void cameraYaw(Vector3 axisYaw, float yaw)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }

        // Raise camera above or below model's shoulders
        public void cameraPitch(Vector3 axisPitch, float pitch)
        {
            ModelLoad.CameraPos = Vector3.Transform(ModelLoad.CameraPos - ModelLoad.camTarget,
                Matrix.CreateFromAxisAngle(axisPitch, pitch)) + ModelLoad.camTarget;
        }

        // Call in update method
        public void updateCamera()
        {
            cameraYaw(Vector3.Up, Yaw);
            cameraPitch(Vector3.Right, Pitch);
        }

    NOTE: I tried to use addPitch just like addRotation but it didn't work...

    Read the article

  • Save Upgrade downtime: Upgrade APEX upfront

    - by Mike Dietrich
    With almost every patch or release upgrade of the Oracle Database, a new version of Oracle Application Express (APEX) will be installed. And as APEX is part of the database installation, it will be upgraded as part of the component upgrades after the ORACLE SERVER component has been successfully upgraded to the new release. But the APEX upgrade can take a while (several minutes or even more in some cases). Therefore it is common advice to upgrade APEX upfront, before upgrading the database, as this can be done online while the database is in production (unless your database serves just as an APEX application backend - in this case upgrading APEX upfront won't save you anything). To upgrade Oracle APEX upfront you'll have to follow MOS Note 1088970.1. It explains that you'll have to:
        • Determine the installation type by running this query:

            select count(*) from <SCHEMA>.WWV_FLOWS where id = 4000;

          where <SCHEMA> can be one of the following:

            FLOWS_010500  1.5.X
            FLOWS_010600  1.6.X
            FLOWS_020000  2.0.X
            FLOWS_020100  2.1.X
            FLOWS_020200  2.2.X
            FLOWS_030000  3.0.X
            FLOWS_030100  3.1.X
            APEX_030200   3.2.X
            APEX_040000   4.0.X
            APEX_040100   4.1.X
            APEX_040200   4.2.X

          If the query returns 0 then you'll need to run apxrtins.sql. If the query returns 1 then you'll need to execute apexins.sql.
        • Download the newest APEX package and install it.
    -Mike

    Read the article

  • How long should diskpart take?

    - by sam
    I am using diskpart to extend a drive that is actually a VHD. I've already extended the VHD. It's on Windows 2003, the C drive doesn't contain the swap file, and the available space is contiguous. However, I didn't see the note that the downloadable Resource Kit diskpart is not for Windows 2003, so I did the extend using the Windows 2000 version. Not sure if this is the reason, but diskpart has been sitting there now for about 15 minutes or so and it only has to extend by 10GB. Should it be taking this long? Am I asking for trouble now that I've used a Windows 2000 version of diskpart on a Windows 2003 machine (VM)?

    Read the article

  • Social Analytics in your current data

    - by Dan McGrath
    By now everyone is aware of the massive boom in social-networking (Twitter, Facebook, LinkedIn) and obviously a big part of its business model revolves around being able to mine this data to create information that can be used to make money for someone. Gartner has identified 'Social Analytics' as one of the top 10 strategic technologies for 2011. Has anyone looked at their existing data structures to determine if they could extract a social graph and then perform further data mining against this? How does it fit in with your other strategic development strategies? What information are you trying to extract from the data? Take for example, a bank. They could conceivably determine a social graph through account relationships and transactions. Obviously there would be open edges on the graph where funds enter/leave the institute, but that shouldn't detract from the usefulness of the data. I'm looking for actual examples with the answers, as well as why/how they did it. References to other sites will be greatly appreciated. Note: I'm not at all referring to mining data out of actual social networks.
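    As a toy illustration of the idea (entirely my own sketch; the record layout is hypothetical and not tied to any real schema), an edge-weighted "social" graph can be derived from nothing more than a table of transfers between accounts:

        from collections import defaultdict

        # toy transaction records: (from_account, to_account, amount)
        transactions = [
            ('acct_1', 'acct_2', 120.0),
            ('acct_2', 'acct_3', 80.0),
            ('acct_1', 'acct_3', 15.5),
        ]

        # build an undirected weighted graph: accounts are nodes, transfers are edges
        graph = defaultdict(lambda: defaultdict(float))
        for src, dst, amount in transactions:
            graph[src][dst] += amount
            graph[dst][src] += amount

        # a simple "social" signal: rank each account's strongest counterparties
        for account, neighbours in graph.items():
            top = sorted(neighbours.items(), key=lambda kv: kv[1], reverse=True)
            print(account, '->', top[:3])

    From a structure like this, the usual social-analytics questions (strongest counterparties, communities, unusually central accounts) can be asked with standard graph algorithms.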

    Read the article

  • Sync iPhone Mail with Webmail

    - by João Paulin
    I had an email account [email protected] hosted on Host A. This mailbox had 100 messages. I wanted to migrate to Host B, so I downloaded all 100 messages from Host A on my iPhone. Now that my site has been successfully migrated to Host B and the email account [email protected] has been created again (the mailbox is empty), how can I get the messages that I have downloaded on my iPhone into the mailbox on Host B? Note that the migration from Host A to Host B did not change the IMAP and SMTP addresses and parameters. I'm still using the same addresses, parameters and ports as before. The email account just switched hosting.
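    One approach worth noting (my suggestion, not from the question): the iPhone Mail app cannot push its local cache back to a new server by itself, so the usual trick is either to drag the messages across in a desktop IMAP client, or, if the old mailbox on Host A is still reachable (for example by IP), to copy them server to server. A rough Python sketch of the latter, with placeholder hostnames and credentials:

        import imaplib

        SRC = ('imap.hosta.example', 'user@example.com', 'old-password')   # placeholders
        DST = ('imap.hostb.example', 'user@example.com', 'new-password')

        src = imaplib.IMAP4_SSL(SRC[0])
        src.login(SRC[1], SRC[2])
        src.select('INBOX')

        dst = imaplib.IMAP4_SSL(DST[0])
        dst.login(DST[1], DST[2])

        typ, data = src.search(None, 'ALL')
        for num in data[0].split():
            # fetch the raw RFC822 message and append it to the new server's INBOX
            typ, msg_data = src.fetch(num, '(RFC822)')
            dst.append('INBOX', None, None, msg_data[0][1])

        src.logout()
        dst.logout()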

    Read the article
