Search Results

Search found 11993 results on 480 pages for 'define syntax'.


  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html

    The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. This was then used in the following way:

        protected Class loadClass(String name, boolean resolve)
            throws ClassNotFoundException
        {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per class loader. As the code above shows, under normal delegation the current class loader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded, and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass has returned the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map.

    It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given that this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case, it is likely that the class loader would need a different means to protect itself rather than a lock per class.
    Ultimately, when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes for guarding its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile, the second attempt will block trying to acquire the lock. Once the class is loaded, the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading, and they both continue successfully.

    Consider if no lock was acquired in the classloader. Both threads will eventually locate the file for the class, read in the bytecodes and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!)

    There are a number of obvious things we can try to solve this problem, and they basically take three forms:

    1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
    2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, using a single shared lockMap instead of a per-loader lockMap.
    3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, or use weak references to the lock objects and clean up the map periodically).

    There are pros and cons to each of these approaches. Unfortunately, a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to the application code (indirectly through the class loader if it exposes them - which a custom loader might do - and regardless, they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly, we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
    Even then there are caveats: for example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive. (This is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive.) Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states that it "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard loading of multiple classes, possibly across different class loaders?

    So it seems that changing the specification will be inevitable if we wish to do something here. In which case, let's go for something that more cleanly defines what we want to be doing: fully concurrent classloading.

    Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

    Proposal: Fully Concurrent ClassLoaders

    The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code re-design, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders.

    Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors) and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

    Preliminary webrevs:
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

    Please direct all comments to the mailing list [email protected].
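    To make the proposal concrete, here is a minimal sketch - not taken from the webrevs above - of what a fully concurrent loadClass could look like, assuming the proposed defineClassIfNotPresent existed as a protected API; loadClassBytes is a hypothetical helper:

        // Sketch only - not the actual webrev code. Assumes a hypothetical
        // defineClassIfNotPresent exposing the VM's find_or_define_class:
        // racing threads all get back the single winning Class, no exception.
        class FullyConcurrentLoader extends ClassLoader {
            @Override
            protected Class<?> loadClass(String name, boolean resolve)
                    throws ClassNotFoundException {
                Class<?> c = findLoadedClass(name);      // no lock acquired anywhere
                if (c == null && getParent() != null) {
                    try {
                        c = getParent().loadClass(name); // normal delegation, unlocked
                    } catch (ClassNotFoundException e) {
                        // fall through and try to define it ourselves
                    }
                }
                if (c == null) {
                    byte[] b = loadClassBytes(name);     // hypothetical: locate bytecodes
                    // Several threads may reach this point for the same name;
                    // the VM picks one definition and returns it to every caller.
                    c = defineClassIfNotPresent(name, b, 0, b.length);
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }

            // Stand-ins so the sketch compiles; the real API would live in the VM.
            private byte[] loadClassBytes(String name) throws ClassNotFoundException {
                throw new ClassNotFoundException(name);
            }

            private Class<?> defineClassIfNotPresent(String n, byte[] b, int off, int len) {
                return defineClass(n, b, off, len);
            }
        }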

    Read the article

  • Why is a small fixed vocabulary seen as an advantage to RESTful services?

    - by Matt Esch
    So, a RESTful service has a fixed set of verbs in its vocabulary, which a RESTful web service takes from the HTTP methods. There are some supposed advantages to defining a fixed vocabulary, but I don't really grasp the point. Maybe someone can explain it.

    Why is a fixed vocabulary, as outlined by REST, better than dynamically defining a vocabulary for each state? For example, object-oriented programming is a popular paradigm. RPC is described as defining fixed interfaces, but I don't know why people assume that RPC is limited by these constraints. We could dynamically specify the interface, just as a RESTful service dynamically describes its content structure.

    REST is supposed to be advantageous in that it can grow without extending the vocabulary; RESTful services grow dynamically by adding more resources. What's so wrong about extending a service by dynamically specifying a per-object vocabulary? Why don't we just use the methods that are defined on our objects as the vocabulary, and have our services describe to the client what these methods are and whether or not they have side effects?

    Essentially, I get the feeling that the description of a server-side resource structure is equivalent to the definition of a vocabulary, but we are then forced to use a limited vocabulary with which to interact with these resources. Does a fixed vocabulary really decouple the concerns of the client from the concerns of the server? I surely have to be concerned with some configuration of the server - normally resource location in RESTful services. To complain about the use of a dynamic vocabulary seems unfair, because we have to dynamically reason about how to understand this configuration in some way anyway. A RESTful service describes the transitions you are able to make by identifying object structure through hypermedia. I just don't understand what makes a fixed vocabulary any better than a self-describing dynamic vocabulary, which could easily work very well in an RPC-like service. Is this just post-hoc reasoning in defense of the limited vocabulary of the HTTP protocol?
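    To make the contrast concrete, here is a small illustrative sketch (all endpoints are hypothetical, invented for this example) of the two styles being compared:

        import requests  # endpoints below are hypothetical, for illustration only

        # REST: the verb set is fixed (GET/PUT/POST/DELETE/...); only the
        # resource identifiers vary, so a generic client can act on any resource.
        order = requests.get("https://api.example.com/orders/42").json()
        requests.delete("https://api.example.com/orders/42")

        # RPC style: every new operation grows the vocabulary, and the client
        # must learn each method name and its semantics individually.
        requests.post("https://api.example.com/rpc",
                      json={"method": "cancelOrder", "params": [42]})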

    Read the article

  • Gödel, Escher, Bach - Gödel's string

    - by Brad Urani
    In the book Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter, the author gives us a representation of the precursor to Gödel's string (Gödel's string's uncle) as: ~Ea,a': (I don't have the book in front of me, but I think that's right). All that remains to get Gödel's string is to plug the Gödel number for this string into the free variable a''. What I don't understand is how to get the Gödel numbers for the functions PROOF-PAIR and ARITHMOQUINE. I understand how to write these functions in a programming language like FlooP (from the book), and could even write them myself in C# or Java, but the scheme that Hofstadter defines for Gödel numbering only applies to TNT (which is just his own syntax for natural number theory), and I don't see any way to write a procedure in TNT, since it doesn't have loops, variable assignments, etc. Am I missing the point? Perhaps Gödel's string is not something that can actually be printed, but rather a theoretical string that need not actually be constructed? I thought it would be neat to write a computer program that actually prints Gödel's string, or Gödel's string encoded by Gödel numbering (yes, I realize it would have a gazillion digits), but it seems like doing so requires some kind of procedural language and a Gödel numbering system for that procedural language that isn't included in the book. Of course, once you had that, you could write a program that plugs random numbers into the variable "a" and runs the procedure PROOF-PAIR on it to test for theoremhood of Gödel's string. If you let it run for a trillion years, you might find a derivation that proves Gödel's string.
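    For what it's worth, the arithmetization step itself is easy to play with in code. Here is a toy sketch - this is not Hofstadter's TNT numbering, just the classic primes-to-exponents trick - that assigns a Gödel number to any string:

        def primes():
            """Yield 2, 3, 5, 7, ... by trial division (fine for a toy)."""
            found = []
            candidate = 2
            while True:
                if all(candidate % p for p in found):
                    found.append(candidate)
                    yield candidate
                candidate += 1

        def godel_number(s):
            """Encode s as a product of prime powers: the i-th prime raised
            to the character code of s[i]; unique by prime factorization."""
            g = 1
            for p, ch in zip(primes(), s):
                g *= p ** ord(ch)
            return g

        print(godel_number("~Ea,a':"))  # already a few hundred digits long

    Because prime factorizations are unique, the string can be recovered from the number, which is the property the construction needs.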

    Read the article

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via some service like MailChimp or sendy.co. Many of these email services suggest, or require, the email newsletter content to be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world. Nor do I want to require existing recipients to create accounts (or be assigned accounts) with passwords. So, the question is: how can my WordPress site content be made viewable only by clicking the link to it in the email newsletter? It shouldn't be findable in a Google search, but once at the site, the visitor should be able to view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions, and answers, which have helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: "Is there a way to password protect a directory in cPanel such that the user is not prompted for the password when they access it via the web?" This person's question is the closest I could find to my situation: "Restrict direct folder access via .htaccess except via specific links".
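    For the "not in Google, but reachable by link" part specifically, one commonly used approach (an untested sketch here, and it relies on the links staying unguessable) is to send a noindex header from .htaccess via mod_headers:

        # Keep pages reachable by anyone holding the emailed link, but ask
        # crawlers not to index or follow them (requires mod_headers).
        <IfModule mod_headers.c>
            Header set X-Robots-Tag "noindex, nofollow"
        </IfModule>

    Note that this is de-indexing rather than access control: anyone who obtains a link can still view the page, which matches the newsletter use case but is not real protection.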

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even though the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required.

    First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as:

        Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012

    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution.

    I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!
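    If you have never written a standalone DAX query before, the shape to put in the Query pane is an EVALUATE statement; for example, against the AdventureWorks Tabular model referenced above (adjust table and column names to your own model):

        EVALUATE
            'Internet Sales'
        ORDER BY 'Internet Sales'[Order Date]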

    Read the article

  • Interpreting Others' Source Code

    - by Maxpm
    Note: I am aware of this question. This question is a bit more specific and in-depth, however, focusing on reading the actual code rather than debugging it or asking the author.

    As a student in an introductory-level computer science class, my friends occasionally ask me to help them with their assignments. Programming is something I'm very proud of, so I'm always happy to oblige. However, I usually have difficulty interpreting their source code. Sometimes this is due to a strange or inconsistent style, sometimes it's due to strange design requirements specified in the assignment, and sometimes it's just due to my stupidity. In any case, I end up looking like an idiot, staring at the screen for several minutes saying "Uh..."

    I usually check for the common errors first - missing semicolons or parentheses, using commas instead of extraction operators, etc. The trouble comes when that fails. I often can't step through with a debugger because it's a syntax error, and I often can't ask the author because he/she doesn't understand the design decisions him/herself.

    How do you typically read the source code of others? Do you read through the code from the top down, or do you follow each function as it's called? How do you know when to say "it's time to refactor"?

    Read the article

  • Ubuntu Froze Keyboard and mouse (laptop)

    - by fernando
    Something similar to what happened to me is described in this post: Updates kill Keyboard and mouse. Unfortunately, I'm stuck there. I also read on a couple of other threads that I should use recovery mode, but when I select that option from GRUB it stops at a certain point; the screen that would allow me to fix packages never appears. I decided to diagnose the computer and test the RAM; so far everything seems to be going well. But this whole thing happened while I was doing an update of around 230 MB... I still haven't found a solution to the frozen keyboard and mouse (trackpad). If all else fails, can I just reinstall Ubuntu? Would that fix the issue? What else can I try? By the way, I'm not great with coding, so if there is anything I need to type with correct syntax, please guide me through it. I've had Ubuntu for literally one day, and this happens. Any suggestions would be appreciated.

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris, it was already clear that a couple of enhancements would be needed. The biggest set of changes made after Solaris 11 Express was released brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...).

    Now, let's take a look at the new functionality:

        /usr/bin/netcat    alternative program name (symlink)
        -b bufsize         I/O buffer size
        -E                 use exclusive bind for the listening socket
        -e program         program to execute
        -F                 no network close upon EOF on stdin
        -i timeout         extension of timeout specification
        -L timeout         linger on close timeout
        -l -p port addr    previously not allowed usage
        -m byte_count      quit after receiving byte_count bytes
        -N file            pattern for UDP scanning
        -I bufsize         size of input socket buffer
        -O bufsize         size of output socket buffer
        -R redir_spec      port redirection (addr/port[/{tcp,udp}] syntax of redir_spec)
        -Z                 bypass zone boundaries
        -q timeout         timeout after EOF on stdin

    Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self-explanatory, their combination with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example:

    - the port redirector allows you to convert a TCP stream to UDP datagrams
    - the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily
    - the socket linger option can be used to produce TCP RST segments by setting the timeout to 0
    - the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script

    If you find other helpful uses, please share them via comments. The manual page nc(1) contains more details, along with examples of how to use some of these new options.
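    A few hedged command sketches, composed only from the option summaries above - check nc(1) on Solaris 11 for the exact semantics before relying on them:

        # linger timeout of 0: closing the connection emits a TCP RST
        nc -L 0 somehost 80

        # 1-byte I/O buffers: produce one-byte TCP segments
        nc -b 1 somehost 9000

        # simulate a trivial TCP service by handing the socket to a program
        nc -l -p 7070 -e /usr/bin/cat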

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system to both allow for loose coupling and, at the same time, allow a consumer of a REST API to understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? e.g. if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts of "first name" and "last name"? Is it up to the person writing the REST API to define, in documentation someplace, what each of the XML fields means?

    What if the producer of the API wants to change the implementation to be <firstname> instead of <fname>? How do they do this and notify their consumers that this change occurred? Or do the consumers just encounter the error, look at the payload, and figure out on their own that it changed?

    I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an "anti-pattern". But I was planning to do this - at least then I would have a statically typed API call that, if it changed, would fail at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL?

    And how do I know what to do with the links that are returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically-typed or even SOAP/RPC to REST!
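    For the links question specifically, the usual answer is that the contract lives in the media type plus documented link relations, not in field names alone; a hypothetical payload might look like this (all URIs and rel names are invented for illustration):

        <order xmlns="http://example.com/schemas/order">
          <fname>John</fname>
          <lname>Smith</lname>
          <!-- the rel, documented once, tells a client what following the
               link means; the href itself is free to change at any time -->
          <link rel="http://example.com/rels/cancel"  href="/orders/42/cancellation"/>
          <link rel="http://example.com/rels/payment" href="/orders/42/payment"/>
        </order>

    The client is coupled to the rel vocabulary and media type documentation, not to any particular URI layout.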

    Read the article

  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way.

    As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments.

    Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and since they are Turing complete, this must be doable too.

    I learned C, Java and Python, and I use assignments a lot in every program I write, so this is really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on them.

    The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance with set!. To me it looks a lot like a class method changing a class member, balance.

    As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point it out.
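    To sketch what the pure style looks like (my example, not SICP's): instead of mutating balance with set!, each operation returns the new balance, and the caller threads it through explicitly:

        ;; withdraw is now a pure function: same inputs, same output
        (define (withdraw balance amount)
          (if (>= balance amount)
              (- balance amount)
              (error "Insufficient funds")))

        ;; state is threaded explicitly from call to call
        (define b0 100)
        (define b1 (withdraw b0 25))   ; 75
        (define b2 (withdraw b1 50))   ; 25

    Pure languages like Haskell package this threading up in abstractions (such as the State monad) so you don't have to pass the balance around by hand.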

    Read the article

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but it is not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?
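    As one hedged illustration of how the "many tiny entities" idea might map onto namespaces (a style suggestion, not something from the book):

        // A narrow namespace per responsibility: small, closely related
        // classes grouped together, with the full path documenting intent.
        namespace billing {
            namespace invoices {
                class Invoice { /* ... */ };
                class InvoiceFormatter { /* ... */ };
            }
        }

        billing::invoices::Invoice inv;  // the qualified name reads like a sentence

    The trade-off the question raises is real: each extra level of nesting adds qualification noise at every use site, so deep hierarchies tend to cost more than they organize.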

    Read the article

  • Testcase runner for parametrized testcases

    - by Razer
    Let me explain my situation. I'm planning a kind of testcase runner for running testcases against external devices, which are microcontroller based. Let's consider the devices:

    - Device 1
    - Device 2

    There exist a lot of testcases which can be run with either of the devices above, for example:

    - Testcase 1
    - Testcase 2

    The main reason all the testcases can be run with any device is that the testcases validate some standard, and this software should be extensible to future devices. The testcases themselves must be runnable with changing parameters. For example, Testcase 1 does some timing verification and needs the datarate as an input parameter: 4800, 9600, 19200.

    Now, hoping you understand the situation, let me explain my design questions. For implementing the testcases I thought about an attribute-based approach, like NUnit does it. The more complicated problem is how to define the parametrized testcases, like this:

        Device 1:
            Testcase 1: datarate: 4800, 9600, 19200
            Testcase 2: supply: 1, 2, 3
        Device 2:
            Testcase 1: datarate: 9600, 19200, 38400
            Testcase 2: supply: 3, 4, 5

    How would you design such a framework? I've done a similar design in Python, where I had an XML file for every device containing the testcase definitions, like:

        <Testcase="Testcase 1" datarate=4800/>
        <Testcase="Testcase 1" datarate=9600/>
        <Testcase="Testcase 1" datarate=19200/>
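    One possible shape for the definition side, sketched in Python with made-up names: keep the device/testcase/parameter matrix as plain data and let the runner expand it:

        # Hypothetical sketch: the parameter matrix as data, expanded by the runner.
        TEST_MATRIX = {
            "Device 1": {"Testcase 1": {"datarate": [4800, 9600, 19200]},
                         "Testcase 2": {"supply": [1, 2, 3]}},
            "Device 2": {"Testcase 1": {"datarate": [9600, 19200, 38400]},
                         "Testcase 2": {"supply": [3, 4, 5]}},
        }

        def run_device(device, testcases):
            """testcases maps a testcase name to a callable taking one keyword arg."""
            for name, params in TEST_MATRIX[device].items():
                for param, values in params.items():
                    for value in values:
                        testcases[name](**{param: value})  # one run per value

        # usage sketch
        run_device("Device 1", {
            "Testcase 1": lambda datarate: print("timing check at", datarate),
            "Testcase 2": lambda supply: print("supply check at", supply),
        })

    Keeping the matrix as data (whether in code, XML or elsewhere) means adding a future device is a data change, not a code change.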

    Read the article

  • Why does Eclipse keep crashing and dpkg erroring out?

    - by Sanju Sony Kurian
    I am getting the error "E: Sub-process /usr/bin/dpkg returned an error code (1)" while trying to upgrade my PC. I use Eclipse and it keeps on crashing... Please help. I attach the error:

        sanju@sanju-Dell-System-XPS-L502X:~$ upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          aisleriot gir1.2-rb-3.0 gir1.2-totem-1.0 gnome-disk-utility gnome-keyring
          libgck-1-0 libgcr-3-1 libtotem0 linux-generic linux-headers-generic
          linux-image-generic rhythmbox rhythmbox-data rhythmbox-mozilla
          rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune
          rhythmbox-plugin-zeitgeist rhythmbox-plugins seahorse totem totem-common
          totem-mozilla totem-plugins
        0 upgraded, 0 newly installed, 0 to remove and 23 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Setting up grub-pc (1.99-21ubuntu3.1) ...
        /var/lib/dpkg/info/grub-pc.config: 35: /etc/default/grub: Syntax error: EOF in backquote substitution
        dpkg: error processing grub-pc (--configure):
          subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
          grub-pc
        E: Sub-process /usr/bin/dpkg returned an error code (1)
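    The key line is the one blaming /etc/default/grub for an "EOF in backquote substitution". One plausible repair path (a sketch - make a backup first):

        # the configure script chokes on an unmatched ` in /etc/default/grub
        sudo cp /etc/default/grub /etc/default/grub.bak
        sudoedit /etc/default/grub      # find and remove the stray backquote
        sudo dpkg --configure -a        # let dpkg finish configuring grub-pc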

    Read the article

  • XNA: calculating normals for line segments

    - by Gerhman
    I am quite new to 3D graphics programming, and thus far only understand that normals somehow define the direction in which a vertex faces, and therefore the direction in which light is reflected. I have no idea how they are calculated, though - only that they are defined by a Vector3. For a visualizer that I am creating, I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line, and then rendering a line list. The thing is, I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. All I have to calculate them with are the start and end positions of each line segment. The image below is a representation of what I think I need to do in the case of an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices, and the green numbers represent their indices. I would greatly appreciate it if someone could explain how I should calculate these normals.
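    Since the normals must face sideways, one standard trick - a sketch in XNA terms; flip the cross-product operand order (or negate) if the result points into the surface - is to cross each segment's direction with the world up axis:

        // For a segment lying in a (roughly) horizontal layer, crossing its
        // direction with Up yields a horizontal, sideways-facing normal.
        Vector3 SegmentNormal(Vector3 start, Vector3 end)
        {
            Vector3 dir = Vector3.Normalize(end - start);
            return Vector3.Normalize(Vector3.Cross(dir, Vector3.Up));
        }

    Assign the same normal to both vertices of a segment; where segments meet at a corner, averaging the two adjacent segment normals gives smoother lighting.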

    Read the article

  • Java Spotlight Episode 88: HTML 5 and JavaFX 2 with Gerrit Grunwald

    - by Roger Brinkley
    Interview with Gerrit Grunwald on HTML 5 and JavaFX 2. Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes, you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News
    - JavaFX 2.1.1 documentation updated on the docs.oracle.com/javafx website
    - Lightview: JavaFX 2 real-time visualizer for GlassFish
    - JavaFX Programmatic POJO Expression Bindings (Parts 1 & 2)
    - The Enterprise Side of JavaFX - Leverage the power of FX Markup Language to define the UI for enterprise applications

    Events
    - June 26-28, Jazoon, Zurich, Switzerland
    - June 27, Houston JUG
    - July 5, Java Forum, Stuttgart, Germany
    - July 13-14, IndicThreads, Delhi
    - July 30-August 1, JVM Language Summit, Santa Clara

    Feature Interview

    Gerrit Grunwald is working as a software engineer at Canoo Engineering AG (Basel, Switzerland). He is responsible for visualizations of all kinds. His technical interests include Java desktop development, specifically the subareas JavaFX, Java Swing and HTML5 controls. He's a frequent blogger (http://www.harmonic-code.org), and founder and leader of the Java User Group in Muenster (Germany), where he also lives. He has been involved in the IT industry since 1996, when he began to study physics at the University of Applied Sciences Muenster (Germany).

    Mail Bag

    What's Cool

    Tab Sweep

    Read the article

  • Microsoft and innovation: IIF() method

    This Saturday I was watching a couple of eLearning videos from TrainSignal (thanks to the subscription I have with Pluralsight) on Querying Microsoft SQL Server 2012 (exam 70-461).

    'Innovation' by Microsoft

    I kept myself busy learning 'new' things about Microsoft SQL Server 2012 and some best practices. It was incredibly 'innovative' to see that there is an additional logic function called IIF() available now:

        Returns one of two values depending on the value of a logical expression.
        IIF(lExpression, eExpression1, eExpression2)

    Oops, my bad... That's actually taken from the syntax page of Visual FoxPro 9.0 SP2. And tada, at least seven (7+) years later, there's the recent IIF() Transact-SQL version of that function:

        Returns one of two values, depending on whether the Boolean expression evaluates to true or false, in SQL Server 2012.
        IIF ( boolean_expression, true_value, false_value )

    Now, that's what I call innovation! But we all know what happened to Visual FoxPro... It has been reincarnated in the form of Visual Studio LightSwitch (and SQL Server). Enough ranting... Happy coding!
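    For completeness, here is the 2012 flavour in use (table and column names are illustrative only); it is just shorthand for a two-branch CASE:

        SELECT OrderID,
               IIF(Quantity > 100, 'bulk', 'retail') AS OrderClass
        FROM dbo.OrderLines;  -- same as CASE WHEN Quantity > 100 THEN ... END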

    Read the article

  • Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment

    - by Annemarie Provisero
    ADVISOR WEBCAST: Oracle Advanced Benefits: Plan Design Maintenance for Open Enrollment

    PRODUCT FAMILY: Oracle HCM - Benefits

    July 13, 2011 at 1 pm PT, 2 pm MT, 4 pm ET

    This session gives you the information needed to define new, and maintain existing, Compensation Objects used in your Benefits setup. The course highlights things to consider when getting ready for Open Enrollment or when there is a need to change compensation objects. We will review creating a new program, plan, or option, and ending an old one. We will also review what to do when you need to move from an Unrestricted program to a Restricted one.

    TOPICS WILL INCLUDE:
    - Adding or Modifying Compensation Objects
    - Ending Compensation Objects
    - Elements and Element Links
    - Standard and Variable Rates
    - Dependents and Beneficiaries
    - Moving from Oracle Standard Benefits to Oracle Advanced Benefits

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there, for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc., etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC).

    The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was - to say it bluntly - a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins - in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?

    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offers two components:

    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM, and it allowed third-party vendors to implement their own IO routines so that the database would use this library, if installed, and make use of the library's open/read/write/close... routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) it for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, and you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the ASM device for /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic: we have only one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices (a short command sketch follows below). This way you can say: create an ASM disk label foo on, currently, /dev/sdf... So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will all be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels.

    Advantages of using ASMLib:

    - a good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the number of open filehandles and the associated overhead
    - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions, or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL.
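    As an aside, the label/scan convenience described above boils down to a couple of commands; label and device names here are illustrative:

        oracleasm createdisk FOO /dev/sdf1   # stamp the partition with label FOO
        oracleasm scandisks                  # rediscover labeled devices after boot
        oracleasm listdisks                  # FOO appears whether it is sdf or sdg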
    It didn't make sense to push the driver into upstream Linux, because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, once we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels - whether for a public release or for customers that needed a one-off fix and also used ASMLib - we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors, and implemented Linux kernel support for DIF/DIX - data protection in the Linux kernel. (Note, to those that wonder: yes, it's all in mainline Linux and under the GPL.) This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before sending it over the wire to the storage array, which can do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media.

    What was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then carries a checksum and validation all the way down to the disk - application to disk. Because we have ASMLib, we had an entry point into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it before we can make use of it. Thanks to UEK, and our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the ASM kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL: we did not have the ability to use the latest kernel features to introduce the Data Integrity work, and we didn't want to end up with multiple versions of ASMLib maintained by us. SuSE SLES still builds and ships the oracleasm module and does all that work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to reiterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM; again, ASM exists on every Oracle-supported OS and on every supported Linux OS - SLES, RHEL, OL - without ASMLib.
    - The Oracle ASMLib userspace package is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and to evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • Best practice with pyGTK and Builder XML files

    - by Phoenix87
    I usually design GUIs with Glade, thus producing a series of Builder XML files (one such file for each application window). Now my idea is to define a class, e.g. MainWindow, that inherits from gtk.Window and implements all the signal handlers for the application's main window. The problem is that when I retrieve the main window from the containing XML file, it is returned as a gtk.Window instance. The solution I have adopted so far is the following: I have defined a class Window in this way:

        class Window():
            def __init__(self, win_name):
                builder = gtk.Builder()
                self.builder = builder
                builder.add_from_file("%s.glade" % win_name)
                self.window = builder.get_object(win_name)
                builder.connect_signals(self)

            def run(self):
                return self.window.run()

            def show_all(self):
                return self.window.show_all()

            def destroy(self):
                return self.window.destroy()

            def child(self, name):
                return self.builder.get_object(name)

    In the actual application code I have then defined a new class, say MainWindow, that inherits from Window and looks like:

        class Main(Window):
            def __init__(self):
                Window.__init__(self, "main")

            ### Signal handlers ###
            def on_mnu_file_quit_activated(self, widget, data = None):
                ...

    The string "main" refers to the main window, called "main", which resides in the XML Builder file "main.glade" (this is a sort of convention I decided to adopt). So the question is: how can I inherit from gtk.Window directly, by defining, say, the class Foo(gtk.Window), and recast the return value of builder.get_object(win_name) to Foo?
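    As for inheriting from gtk.Window directly: you cannot recast the instance that Builder created, but one workaround to try (an untested sketch) is to reparent the Glade window's child into your own subclass:

        # Untested sketch: subclass gtk.Window and adopt the Glade-built
        # content, since builder.get_object() can't return your subclass.
        import gtk

        class MainWindow(gtk.Window):
            def __init__(self, win_name="main"):
                super(MainWindow, self).__init__()
                builder = gtk.Builder()
                builder.add_from_file("%s.glade" % win_name)
                glade_win = builder.get_object(win_name)
                child = glade_win.get_child()   # top-level container in the window
                glade_win.remove(child)         # detach it from the throwaway window
                self.add(child)                 # ...and adopt it into self
                builder.connect_signals(self)
                self.builder = builder

    The Glade window then only serves as a temporary holder; window-level properties set in Glade (title, default size, ...) would need to be re-applied to the subclass by hand.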

    Read the article

  • Does Ubuntu 11.10 support ns2.29?

    - by nasser
    I need your help. I'm a beginner with NS2, and I'm trying to install ns2.29 on Ubuntu 11.10 32-bit, but I can't. This message appears and the installation stops:

        Build tcl8.4.11
        ============================================================
        loading cache ./config.cache
        checking whether to use symlinks for manpages... no
        checking whether to compress the manpages... no
        checking whether to add a package name suffix for the manpages... no
        checking for gcc... gcc
        checking whether the C compiler (gcc ) works... yes
        checking whether the C compiler (gcc ) is a cross-compiler... no
        checking whether we are using GNU C... yes
        checking whether gcc accepts -g... yes
        checking for building with threads... no (default)
        checking if the compiler understands -pipe... yes
        checking how to run the C preprocessor... gcc -pipe -E
        checking for sin... no
        checking for main in -lieee... yes
        checking for main in -linet... no
        checking for net/errno.h... no
        checking for connect... yes
        checking for gethostbyname... yes
        checking how to build libraries... static
        checking for ranlib... ranlib
        checking if 64bit support is requested... no
        checking if 64bit Sparc VIS support is requested... no
        checking system version (for dynamic loading)... ./configure: 1: Syntax error: Unterminated quoted string
        tcl8.3.2 configuration failed! Exiting ...
        Tcl is not part of the ns project. Please see www.Scriptics.com
        to see if they have a fix for your platform.

    Can anyone help me?

    Read the article

  • SFML - Moving a sprite on mouseclick

    - by Mike
    I want to be able to move a sprite from its current location to another based upon where the user clicks in the window. This is the code that I have:

        #include <SFML/Graphics.hpp>

        int main()
        {
            // Create the main window
            sf::RenderWindow App(sf::VideoMode(800, 600), "SFML window");

            // Load a sprite to display
            sf::Texture Image;
            if (!Image.LoadFromFile("cb.bmp"))
                return EXIT_FAILURE;
            sf::Sprite Sprite(Image);

            // Define the speed of the sprite
            float spriteSpeed = 200.f;

            // Start the game loop
            while (App.IsOpened())
            {
                if (sf::Keyboard::IsKeyPressed(sf::Keyboard::Escape))
                    App.Close();

                if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
                {
                    Sprite.SetPosition(sf::Mouse::GetPosition(App).x,
                                       sf::Mouse::GetPosition(App).y);
                }

                // Clear screen
                App.Clear();

                // Draw the sprite
                App.Draw(Sprite);

                // Update the window
                App.Display();
            }

            return EXIT_SUCCESS;
        }

    But instead of just setting the position, I want to use Sprite.Move() and gradually move the sprite from one position to another. The question is: how? Later I plan on adding a node system to each map so I can use Dijkstra's algorithm, but I'll still need this for moving between nodes.
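    One way to get gradual movement (a sketch, not compiled against your exact SFML snapshot): keep the click as a target, and each frame step toward it by speed times elapsed seconds:

        #include <cmath>   // std::sqrt

        // before the game loop:
        sf::Vector2f target = Sprite.GetPosition();
        sf::Clock clock;

        // inside the loop: remember the click as a target instead of jumping
        if (sf::Mouse::IsButtonPressed(sf::Mouse::Right))
            target = sf::Vector2f(sf::Mouse::GetPosition(App).x,
                                  sf::Mouse::GetPosition(App).y);

        // step toward the target by speed * elapsed seconds each frame.
        // NOTE: timing names vary across SFML versions (GetFrameTime,
        // Restart().AsSeconds(), ...); 1.6-style calls are shown here.
        float dt = clock.GetElapsedTime();
        clock.Reset();
        sf::Vector2f delta = target - Sprite.GetPosition();
        float dist = std::sqrt(delta.x * delta.x + delta.y * delta.y);
        if (dist > 1.f)
            Sprite.Move(delta * (spriteSpeed * dt / dist));  // gradual Move()

    Clamping the step length to dist avoids overshooting on the final frame; the same move-toward-target step also works for following Dijkstra nodes later.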

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite is, I think, the best term.

    The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's some kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading type stuff, and have one patch of noise generate each region. But this would result in huge numbers of islands. At the other extreme, I don't think I can really generate one supermassive sheet of Perlin noise - and that would just be one big island, I think.

    I am pretty sure Perlin noise, or some noise, would be the answer in some way. I mean, the map looks really nice, and you could replace the ASCII with tiles and get something very nice looking.

    Anyone have any ideas? Thanks. :D
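    One way to get the pseudo-infinite part without seams between regions (a sketch using value noise, a simpler cousin of Perlin noise): derive every lattice value deterministically from (seed, x, y), so any region can be generated independently, at any time, and still match its neighbours:

        import hashlib, math

        def lattice_value(seed, x, y):
            """Deterministic pseudo-random value in [0, 1] for an integer lattice point."""
            h = hashlib.md5(("%d:%d:%d" % (seed, x, y)).encode()).hexdigest()
            return int(h[:8], 16) / float(0xFFFFFFFF)

        def smooth(t):
            return t * t * (3 - 2 * t)   # smoothstep fade curve

        def value_noise(seed, fx, fy):
            """Smoothed bilinear blend of the four surrounding lattice values."""
            x0, y0 = int(math.floor(fx)), int(math.floor(fy))
            tx, ty = smooth(fx - x0), smooth(fy - y0)
            top = (lattice_value(seed, x0, y0) * (1 - tx)
                   + lattice_value(seed, x0 + 1, y0) * tx)
            bot = (lattice_value(seed, x0, y0 + 1) * (1 - tx)
                   + lattice_value(seed, x0 + 1, y0 + 1) * tx)
            return top * (1 - ty) + bot * ty

        # any chunk, any time: height = value_noise(1234, world_x / 32.0, world_y / 32.0)

    Sum a few octaves at different frequencies for the fractal look; because nothing depends on generation order, regions can be created and discarded on demand with no visible joins.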

    Read the article

  • How to properly learn ASP.NET MVC

    - by Qmal
    Hello everyone, I have a question to ask, and maybe some of you will think it's lame, but I hope someone will set me on the right track. I've been programming for quite some time now. I started programming when I was about 13 or so in Delphi, but at around 17 I switched to C#, and now I really like to program with it, mostly because its syntax is very appealing to me, plus managed code is very good. So it was all good and fun, but then I got some job offers that I of course took, and the problem with them is that they are all about web programming. I had to learn PHP and MVC fundamentals, and I somewhat did, while building applications using the CI (CodeIgniter) and Kohana frameworks. But I want to build websites using ASP.NET, because I like C# much, much more than PHP.

    TL;DR: I want to learn ASP.NET MVC but I don't know where to start. What I want to start with is building something simple, like a CMS. But I don't know where to begin. Do I use the same logic as in PHP? What do I use for DB connections? And also, if I plan to host something built with ASP.NET MVC 3 on a hosting provider, do I need to buy some kind of license?

    Read the article

  • Help me select a "Simpler" target to create a new language: .NET, LLVM, Go, Own VM

    - by mamcx
    Lets define "Simple". This is my first language. I have no previous experience I will not dedicate +4 years to learn it properly. I'm a professional software [developer], but as an amateur in this area, I want instant gratification. If the idea shows a future, I could rewrite it. I don't want to do everything from scratch. In fact, if there exists a way to get GO (for example), change its syntax, add some sugar, give some extra functions and leave intact everything else, that would be perfect! From the example of coffescript/scala I think is better to build on top of some rich runtime like .NET/GO so I don't need to rewrite everything. HOWEVER, if is better other way, no problem for the first try! I want it in a week. I need it in a week so it will really take a month. Then it truly takes 3 months. But I don't want to put more that 3 months on this. I could reduce the scope of my language, but I hope the tools will help me a lot... I want to build a new language. Similar to python, but typed. I wonder what to build it on top of. I like the idea of building on top of GO. To get their sane (IMHO) OO paradigm (I plan to do the same, using interfaces, not inheritance), get goroutines and some other stuff. In my naive thinking I imagine that spit another language could help me to debug it more easily. However, look like everyone is building on top of something like .NET (don't like Java), LLVM or make it own VM. I read http://createyourproglang.com/ (great!) and the part of the VM look "easy" to me. So, what I need is the proper criteria and question I need to know in advance to have a fair shot at make this.

    Read the article

  • SEO + international sites? country.domain.com or domain.country?

    - by Pure.Krome
    Hi folks, is it better for SEO to have separate country-specific domains (which cost more money) or subdomains which define the country? E.g.:

        stackoverflow.com
        stackoverflow.com.au
        stackoverflow.co.uk

    vs.

        stackoverflow.com
        au.stackoverflow.com
        uk.stackoverflow.com

    Assumption: in the search engines' webmaster tools, each subdomain is associated with a country; e.g. au.stackoverflow.com is associated with the country Australia. Cheers!

    Update: I understand that both methods work, especially when I apply the assumption listed above. The question is: which method is better? Is there only a small SEO difference between them, or is the first method way, way better than the second at getting SEO results?

    Update #2: A number of folks have suggested that the following is a good/better approach:

        stackoverflow.com/
        stackoverflow.com/au
        stackoverflow.com/uk

    By adding a country-specific ISO code as the first folder of the URL, that path can be recognised as the country. But a number of SEO mates have suggested that this is a waste of valuable folder-level space. Er... how can I explain? OK, it's been suggested by some SEO experts that if the number of levels or folders in the URL exceeds 5, the page drops dramatically in importance. Basically, you don't want to make it deep. As such, adding the country as the first level can be considered a waste, especially when it can be handled by the domain OR subdomain - hence the question :) So, any more thoughts on this? (Maybe SO is the wrong place to ask this question?)

    Read the article
