Search Results

Search found 2896 results on 116 pages for 'comparison operators'.

Page 78 of 116

  • eSTEP Newsletter November 2011 now available

    - by uwes
    Dear Partners, we would like to inform you that the November issue of our newsletter is now available. The issue contains information on the following topics: Notes from Corporate: Magic Quadrant for Enterprise Application Servers, Oracle Buys RightNow. Technical Corner: Oracle Solaris 11 - The First Cloud OS, Oracle Solaris 10 8/11 now available, new RAC/Containers certifications, DTrace and Containers for Oracle Linux, Oracle Enterprise Manager Ops Center released, news from the Oracle Solaris Cluster, SPARC - new roadmap, T-Series benchmarks. Learning & Events: eSTEP events schedule, recently delivered TechCasts, campaigns delivered in 2011. How to...: About Oracle Solaris Containers, detailed feature comparison between the different versions of Database 11g, Upgrade Advantage Program plus a table with examples, Sun software name ===> new Oracle name, Oracle Linux and OVM certification search, TO YOUR ATTENTION - repricing of servers and X-options. You will find the newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access; the link to the portal is shown below. URL: http://launch.oracle.com/ PIN: eSTEP_2011. Previously published newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore; any feedback that helps us improve the service and information we deliver is appreciated. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • GUI based backup utility [closed]

    - by Chethan S.
    Possible Duplicate: Comparison of backup tools. I have read favorable reviews of 'Back In Time' for the purpose stated above, but I am still posting this question because I have some requirements in mind. A few years back I was using ThinkVantage Rescue and Recovery by IBM on my Lenovo PC under Windows. It gave me nice features such as compressed backups and boot-time options - repair the OS, restore the entire OS, restore the entire system to an older date, restore individual files, etc. Of these, the feature I liked most was compressed backups. Similar features are available in software like Norton Ghost too. In Back In Time I was surprised to see that a snapshot takes up the same amount of space as the original contents - no compression at all. Furthermore, I was not able to find options to change the compression ratio, etc. under settings. In my view, compression of backups is a must-have feature, so can anyone suggest another utility that serves the purpose? I insist on a GUI-based tool since I don't want to mess up my backups!

    Read the article

  • Is there any kind of established architecture for browser based MMO games?

    - by black_puppydog
    I am beginning the development of a browser-based game in which players take certain actions at any point in time. Big parts of gameplay will happen in real life and just have to be entered into the system. I believe a good comparison might be a platform for managing fantasy football, although I have virtually no experience playing that, so please correct me if I am mistaken. The point is that some events happen in the program (i.e. on the server, out of reach of the players), like pulling new results from some data source or the starting of a new round by a game master. Other events happen in real life (two players closing a deal on the transfer of some team member or whatnot - again: I have never played fantasy football) and have to be entered into the system. The first part is pretty easy, since the game masters will be "staff" and thus can be trusted to a certain degree not to mess with the system. But the second part bothers me quite a lot, especially since the actions may involve multiple steps and interactions with different players, like registering a deal with the system that then has to be approved by the other party, or denied and passed on to a game master to decide. I would of course like to separate the game logic as far as possible from the presentation and basic form validation, but am unsure how to do this in a clean fashion. Of course I could (and will) put some effort into making my own architectural decisions and prototype different ideas. But I am bound to make some stupid mistakes at some point, so I would like to avoid some of that by getting a little "book smart" beforehand. So the question is: is there any kind of architectural work that I can read up on? Papers, blogs, maybe design documents or even source code? Writing this down, it seems more like a business application with business rules, workflows and such... Any good entry points for that?

    Read the article

  • Distance between hyperplanes

    - by michael dillard
    I'm trying to teach myself some machine learning, and have been using the MNIST database (http://yann.lecun.com/exdb/mnist/) to do so. The author of that site wrote a paper in '98 on all different kinds of handwriting recognition techniques, available at http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf. The 10th method mentioned is a "Tangent Distance Classifier". The idea is that if you place each image in an (NxM)-dimensional vector space, you can compute the distance between two images as the distance between the hyperplanes formed by each, where the hyperplane for a point is generated by rotating the image, rescaling the image, translating the image, etc. I can't figure out enough to fill in the missing details. I understand that most of these are indeed linear operators, so how does one use that fact to create the hyperplane? And once we have a hyperplane, how do we take its distance to other hyperplanes?
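
    For the missing details, the usual formulation (a sketch of the standard two-sided tangent distance, not a quote from the paper) goes like this: each image x gets a matrix L_x whose columns are the tangent vectors, i.e. numerical derivatives of the image with respect to each small transformation (rotation, scaling, translation, ...), and the "hyperplane" is the affine subspace through x spanned by those columns.

        % Tangent planes at images x and y, with k transformation parameters:
        E_x \;=\; \{\, x + L_x \alpha \mid \alpha \in \mathbb{R}^{k} \,\}, \qquad
        E_y \;=\; \{\, y + L_y \beta  \mid \beta  \in \mathbb{R}^{k} \,\}

        % Two-sided tangent distance between the images:
        D(x, y) \;=\; \min_{\alpha,\,\beta} \, \bigl\lVert (x + L_x \alpha) - (y + L_y \beta) \bigr\rVert^{2}
        % Minimising over (alpha, beta) is a small linear least-squares problem
        % (normal equations in 2k unknowns), so the distance between the two
        % hyperplanes is cheap once L_x and L_y have been computed.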

    Read the article

  • ASP.Net navigation tabs like windows tab control

    - by devphil
    I would like to have a webpage that behaves something like a Windows tab control. Each page should not lose its contents and data while moving between pages, postbacks, etc. Here is the website design and my idea: [Master Page] "Fruits" "Cars" "Animals" "Operators". Clicking on "Fruits" forwards to the "Fruits" page, and the same goes for the other links (tabs). The user works on the "Fruits" page, searching fruits, filling in some fields, etc. The user then moves to the "Cars" page and builds up his own car by filling in some fields, and then goes back to the "Fruits" page again - and sees the same page exactly as he/she left it. Please suggest some good ways other than using javascript:history.go(-1). Is this possible to implement?

    Read the article

  • Counting of objects created on the stack and heap for many classes

    - by viswanathan
    What is the best way to count the total number of objects created on both the stack and the heap for different classes? I know that in C++ the new and delete operators can be overloaded, and hence in the default constructor and destructor the object count can be incremented or decremented as the objects get created or destroyed. Further, if I am to extend the same thing to object counting for objects of different classes, I can create a dummy class, write the object-count code in that class, and then whenever I create a new class I can derive it from the dummy class. Is there any other optimal solution to the same problem?
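
    A minimal sketch of the base-class idea from the question, written with CRTP so each class keeps its own counters (the Counted/Widget names are made up for the sketch); overloading operator new/delete in the base is what separates heap objects from the rest, and it deliberately ignores array new and placement new, so treat the numbers as approximate:

        #include <cstddef>
        #include <iostream>

        // Counter base: derive as "class Widget : public Counted<Widget>".
        template <typename T>
        class Counted {
        public:
            Counted()               { ++total_; }
            Counted(const Counted&) { ++total_; }
            ~Counted()              { --total_; }

            // Only heap allocations go through these, so total_ - heap_
            // approximates the number of stack/static instances.
            static void* operator new(std::size_t size) {
                ++heap_;
                return ::operator new(size);
            }
            static void operator delete(void* p) {
                --heap_;
                ::operator delete(p);
            }

            static std::size_t live()   { return total_; }
            static std::size_t onHeap() { return heap_; }

        private:
            static std::size_t total_;
            static std::size_t heap_;
        };

        template <typename T> std::size_t Counted<T>::total_ = 0;
        template <typename T> std::size_t Counted<T>::heap_  = 0;

        class Widget : public Counted<Widget> {};   // example client class

        int main() {
            Widget a;                      // stack object
            Widget* b = new Widget;        // heap object
            std::cout << Widget::live()   << " live, "
                      << Widget::onHeap() << " on the heap\n";  // 2 live, 1 on the heap
            delete b;
        }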

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs non-nullable types. While developing our C# projects we frequently compare against null to avoid null exceptions. This simple operation is mainly coded using a check like "x == null" inside an if clause. However, not all types of variables are nullable, which means that setting a variable to null is not allowed in every case; it depends on what kind of type you are defining. But what if there were an extension to your non-nullable type that converted it to a nullable one? This extension really exists. As I said before, in C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and they can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it is a non-nullable type; however, if you declare an "int?" your variable can be null. What you do is convert a non-nullable type to a nullable type. Example: "int x = null;" is not allowed, while "int? x = null;" is allowed. While using nullable types you can check whether a variable is null the same way you do with reference types. But what about setting a default value when a certain variable is null? In these cases the C# .NET Framework lets you set a default value when you try to assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator you can still catch the InvalidOperationException which is thrown in these cases. Using the ?? operator your code becomes cleaner and easier to read, and as a bonus you can set a default value for multiple variables in a chained expression. That's it. Thanks, Rui Machado rpmachado.wordpress.com

    Read the article

  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can try a theoretical answer; I don't need literal code/examples... I have a huge data set of strings and I want to cluster the strings to find the most related ones; these could as well be seen as duplicates. I already have a set of couples of strings for which I know they are duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that would return the most relevant candidate couples of strings for which we don't yet know that they are duplicates. I believe I need clustering for this - which type? Note that I want to base this on word-occurrence comparison, not on interpretation or meaning. Here is an example of two strings which we know are duplicates (in our view of them): "The weather is really cold and it is raining." / "It is raining and the weather is really cold." Now, the following strings also exist (most to least relevant, ignoring stop words): "Is the weather really that cold today?", "Rainy days are awful.", "I see the sunshine outside." The software would return the following two strings as most relevant, which aren't known to be duplicates: "The weather is really cold and it is raining." / "Is the weather really that cold today?" Then I would mark that as duplicate or not duplicate, and it would present me with another couple. How do I implement this in the most efficient way, so that I can apply it to a large data set?

    Read the article

  • rails: has_many :through + polymorphism validation?

    - by ramonrails
    I am trying to achieve this - any hints? A project has many users through a join model; a user has many projects through a join model. The Admin class inherits from the User class and also has some Admin-specific stuff; likewise for Supervisor and Operator. A project has one Admin, one Supervisor and many Operators. Now I want to 1. submit data for the project, admin, supervisor and operators in a single project form, and 2. validate all of them and show errors on the project form. Project has_many :users, :through => :projects_users. User has_many :projects, :through => :projects_users. ProjectsUser: id :integer, user_id :integer, project_id :integer, user_type :string. ProjectsUser belongs_to :project, belongs_to :user, :polymorphic => true. Admin < User, Supervisor < User, Operator < User. Is the approach correct? Any and all suggestions are welcome.

    Read the article

  • algorithm to combine data for linear fit?

    - by BoldlyBold
    I'm not sure if this is the best place to ask this, but you guys have been helpful with plenty of my CS homework in the past so I figure I'll give it a shot. I'm looking for an algorithm to blindly combine several dependent variables into an index that produces the best linear fit with an external variable. Basically, it would combine the dependent variables using different mathematical operators, include or not include each one, etc. until an index is developed that best correlates with my external variable. Has anyone seen/heard of something like this before? Even if you could point me in the right direction or to the right place to ask, I would appreciate it. Thanks.
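
    One piece of this is standard enough to state up front (a hedged note, assuming the candidate variables are stacked as the columns of a matrix X and the external variable is the vector y): for purely linear combinations, ordinary least squares already gives the best-fitting index in closed form, so the open part of the problem is really the choice of which raw or transformed columns (products, logs, reciprocals, ...) to include, which is what stepwise selection or an L1-penalised fit automates.

        \hat{\beta} \;=\; \arg\min_{\beta} \, \lVert y - X\beta \rVert^{2}
                    \;=\; (X^{\top} X)^{-1} X^{\top} y
        % The index X\hat{\beta} then has the highest correlation with y
        % among all linear combinations of X's columns.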

    Read the article

  • Booting off a ZFS root in 14.04

    - by RJVB
    I've been running a Debian derivative (LMDE) on a ZFS root for half a year now. It was created by cloning a regular ext4-based install with all the necessary packages onto a ZFS pool, chrooting into that pool and recreating a grub menu and bootloader. The system uses a dedicated ext3 /boot partition. I would like to do the same with Ubuntu 14.04, but have encountered several obstacles. There is no Trusty zfs-grub package, and the default grub package doesn't have ZFS support built in; I found a small bug in the build system responsible for that (report with patch created) and built my own grub packages. The built-in ZFS support is dysfunctional - it does not add the proper arguments to the kernel command line - so I instead installed the ZoL grub package I also use on my LMDE system, which does give me a correct grub.cfg. However, even with that correct grub.cfg, the boot process apparently doesn't retrieve the bootfs parameter from the ZFS pool; instead the variable that's supposed to receive the value remains empty. As a result, initrd tries to load the default pool ("rpool"), which fails of course. I can however import the pool by hand and complete the boot process manually. If memory serves me well, I also had to disable AppArmor to keep the boot process from blocking after importing the pool. Am I overlooking something? Just for comparison, I installed the Ubuntu 3.13 kernel on my LMDE system, and that works just fine (i.e. the identical kernel and grub binaries allow successful booting without glitches on LMDE but not on Ubuntu).

    Read the article

  • Nonstatic conversion functions; Casting different types, e.g. DirectX vector to OpenGL vector

    - by Markus
    I am currently working on a game "engine" that needs to move values between a 3D engine, a physics engine and a scripting language. Since I need to apply vectors from the physics engine to 3D objects very often, and want to be able to control both the 3D objects and the physics objects through the scripting system, I need a mechanism to convert a vector of one type (e.g. vector3d<float>) to a vector of the other type (e.g. btVector3). Unfortunately I can make no assumptions about how the classes/structs are laid out, so a simple reinterpret_cast probably won't do. So the question is: is there some sort of 'static'/non-member casting method to achieve basically this: vector3d<float> operator vector3d<float>(btVector3 vector) { // convert and return } btVector3 operator btVector3(vector3d<float> vector) { // convert and return } Right now this won't compile, since casting operators need to be member methods (error C2801: 'operator foo' must be a non-static member).
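
    Since the language only allows conversion operators as member functions, the usual workaround when neither class can be modified is a pair of free conversion functions (or a convert<To>(from) function template wrapping them). A minimal sketch; the stand-in types and their member layout (X/Y/Z versus x()/y()/z()) are assumptions made only to keep the example self-contained:

        #include <iostream>

        // Stand-ins for the real engine types, just so the sketch compiles;
        // the actual classes are assumed, not reproduced here.
        template <typename T> struct vector3d { T X, Y, Z; };
        struct btVector3 {
            btVector3(float x, float y, float z) : v{x, y, z} {}
            float x() const { return v[0]; }
            float y() const { return v[1]; }
            float z() const { return v[2]; }
            float v[3];
        };

        // Free conversion functions: callers write toBullet(p) / toEngine(q)
        // instead of a cast, and neither class needs to know about the other.
        inline btVector3 toBullet(const vector3d<float>& v) {
            return btVector3(v.X, v.Y, v.Z);
        }
        inline vector3d<float> toEngine(const btVector3& v) {
            return vector3d<float>{v.x(), v.y(), v.z()};
        }

        int main() {
            vector3d<float> p{1.0f, 2.0f, 3.0f};
            btVector3 q = toBullet(p);
            vector3d<float> r = toEngine(q);
            std::cout << r.X << ", " << r.Y << ", " << r.Z << "\n";   // 1, 2, 3
        }

    If cast-like syntax is preferred, the same two functions can sit behind a single convert<To>(from) template, but copying the three components is the whole conversion either way.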

    Read the article

  • Semantic errors

    - by gautam kumar
    Can semantic errors be detected by the compiler or not? If not, when do such errors get detected? As far as I know, semantic errors are those errors which result from expressions involving operators with an incorrect number or type of operands. For example: n3 = n1 * n2; // n1 is an integer, n2 is a string, n3 is an integer. The above statement is semantically incorrect. But while reading C Primer Plus by Stephen Prata I found the following statement: "The compiler does not detect semantic errors, because they don't violate C rules. The compiler has no way of divining your true intentions. That leaves it to you to find these kinds of errors. One way is to compare what a program does to what you expected it to do." If not the compiler, who detects those errors? Am I missing something?
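
    The distinction the book draws is easier to see with a one-line example (a small sketch; note that the n1*n2 case above is actually a constraint violation the compiler will report, whereas the kind of error Prata means compiles cleanly):

        #include <iostream>

        int main() {
            int total = 10, count = 3;
            // Semantically wrong but syntactically fine: integer division
            // throws away the fraction the author presumably wanted.
            double average = total / count;   // intended ~3.33, actually 3
            std::cout << average << "\n";     // prints 3, with no diagnostic at all
        }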

    Read the article

  • Creating LINQ to SQL for counting a parameter

    - by Matt
    I'm trying to translate a SQL query into LINQ to SQL. I keep getting the error "sequence operators not supported for type 'System.String'". If I take out the distinct count part, it works. Is it because I'm using the GROUP BY? SELECT COUNT(EpaValue) AS [Leak Count], Location, EpaValue AS [Leak Desc.] FROM ChartMes.dbo.RecourceActualEPA_Report WHERE (EpaName = N'LEAK1') AND (Timestamp > '20100429030000') GROUP BY EpaValue, Location ORDER BY Location, [Leak Count] DESC Dim temp = (From p In db2.RecourceActualEPA_Reports _ Where (p.Timestamp >= str1stShiftStart) And (p.Timestamp < str2ndShiftCutoff) _ And (p.EpaName = "Leak1") _ Select p.EpaName.Distinct.Count(), p.Location, p.EpaValue)

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each choice might have - for example, storing the IDs of the entities in the result set on the client, storing them in the user's session on the web server, or keeping them in a temporary table in the database. I'm not looking for a single solution, as different scenarios may call for different approaches (and such a question would be more suited to stackoverflow.com rather than here), but rather a design comparison between the possible approaches.

    Read the article

  • What language available on commodity web hosts would suit a C# developer? [closed]

    - by billpg
    Recognising its ubiquity on commodity web hosting services, I tried developing in PHP a few years ago. I really didn't like it, later deciding that life was too short for PHP. (In brief: having to put $ on variable names; mis-spelt variable names becoming new variables; conversion of non-numeric strings to integers without complaint; the need for an "and this time I mean it" comparison operator.) In my ideal world, commodity web hosts would all support C#/ASP.NET, my preferred web-development language and framework, but this is not my ideal world. Even Mono has barely made a dent on Linux-based hosts. However, the last time I moaned about PHP's ubiquity, someone followed up that this was no longer the case, and that many other languages are now commonly usable on web hosts too. So, what programming language (a) would suit a developer who prefers C#, and (b) is available to run on many web hosts?

    Read the article

  • how to structure code that uses std::rel_ops

    - by R Samuel Klatchko
    I was working on some code and wanted to make use of std::rel_ops. From what I can tell, you need to add using std::rel_ops to your source code to make use of them, but I'm not sure where the best place to put that is. Let's say I have a header file with a class that only defines the minimal operator== and operator<: // foo.h class foo { public: bool operator==(const foo &other) const; bool operator<(const foo &other) const; }; I'm not sure where to put the using std::rel_ops. If I leave it out of foo.h, then every user of foo.h needs to know the implementation detail that foo does not define all the operators itself. But putting using std::rel_ops inside foo.h breaks the rule of thumb about not having a using in a header file. How do other people resolve this issue?
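
    One common resolution, shown here as a minimal self-contained sketch (the int member is a stand-in so the example compiles on its own), is to keep the using out of foo.h entirely and pull std::rel_ops into the scope that actually performs the comparisons:

        #include <utility>   // std::rel_ops

        class foo {
        public:
            explicit foo(int v) : v_(v) {}
            bool operator==(const foo& other) const { return v_ == other.v_; }
            bool operator< (const foo& other) const { return v_ <  other.v_; }
        private:
            int v_;
        };

        // The using-directive lives in the function that needs the derived
        // operators, so foo.h never leaks it to its users.
        bool inRange(const foo& x, const foo& lo, const foo& hi) {
            using namespace std::rel_ops;
            return x >= lo && x <= hi;   // >= and <= synthesised from < and ==
        }

        int main() { return inRange(foo(2), foo(1), foo(3)) ? 0 : 1; }

    The alternative that avoids std::rel_ops altogether is to define the remaining four operators as plain inline free functions next to foo in foo.h, which keeps the header self-describing at the cost of a few extra lines.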

    Read the article

  • Why does there seem to be a lot of fear in choosing the "wrong" language to learn?

    - by Shewbox
    Perhaps it's just me, but as a current CS student I have already come across many questions, on this site and elsewhere, asking not just "Which language should I use for x?" but also "Does anyone still use language Y?" My first CS class was taught in Scheme, which, if I'm not mistaken, isn't used widely (at least in comparison to languages like Java, PHP, Python, etc.). Many of my classmates balked at the idea of having to learn a language they would never have to use again, but I don't quite understand where so much of this fear of learning less popular languages comes from. No, I may not use Scheme in any job I get, but I certainly don't regret having learned it (albeit in a very beginner-level, not very in-depth manner in that one semester). I am taking a search-engines class this semester, which is done in Perl, and again I am seeing classmates complaining about the language choice. I can understand having a favorite language and disliking others, but why do some get worked up over learning an unfamiliar one in the first place? Can you really learn the "wrong" language? Isn't learning something like Scheme or Haskell good mental exercise if nothing else, and useful at least as exposure to different ways of solving problems?

    Read the article

  • Is there a way to automatically serialize enums as int?

    - by FireAphis
    Hello, is there a way to serialize enums automatically as int? Every time I define a new enum and write std::stringstream stream; stream << myenum1; stream >> myenum2; the compiler complains that the operators << and >> are not defined. Do you know a way to tell the compiler to treat enums as plain ints? What makes the problem harder is that the serialization is actually inside a template, something like this: template <typename T> void serialize(const T& value) { std::stringstream stream; stream << value; } So I cannot add any casts :( Maybe I can specialize it somehow? Thank you.
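
    If C++11 type traits are available, one way to handle this inside the template itself, sketched below, is a small helper that funnels enum arguments through their underlying integer type and passes everything else through untouched (the asStreamable name is made up for the sketch):

        #include <sstream>
        #include <string>
        #include <type_traits>

        // Enums are converted to their underlying integer type...
        template <typename T>
        typename std::enable_if<std::is_enum<T>::value,
                                typename std::underlying_type<T>::type>::type
        asStreamable(T value) {
            return static_cast<typename std::underlying_type<T>::type>(value);
        }

        // ...everything else passes through unchanged.
        template <typename T>
        typename std::enable_if<!std::is_enum<T>::value, const T&>::type
        asStreamable(const T& value) {
            return value;
        }

        template <typename T>
        std::string serialize(const T& value) {
            std::stringstream stream;
            stream << asStreamable(value);   // enums now stream as plain integers
            return stream.str();
        }

        enum Color { Red = 1, Green = 2 };

        int main() {
            std::string a = serialize(Green);   // "2"
            std::string b = serialize(42);      // "42"
            return (a == "2" && b == "42") ? 0 : 1;
        }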

    Read the article

  • NFS mount of /var/www to OS X

    - by ploughguy
    I have spent two hours trying to create an NFS mount from my Ubuntu 10.04 LTS server to my OS X desktop system. Objective: a three-way file compare between the code base on the Mac, the development system on the local Linux test system, and the hosted website. The hosted service uses cPanel, so I can mount a webdisk - easy as pie, took 10 seconds. The local Ubuntu box, on the other hand - nothing but pain and frustration. Here is what I have tried: In File Browser, navigate to /var/www/site and right-click, select "Share this folder", enter the share name wwwsite and a comment, and click the "Create Share" button. A message says you can only share file systems you own. There is a message on how to fix this, but the killer is that this shares via SMB, which will change the LFs to CR-LFs and affect the file comparison - so forget this option. In a terminal window, run shares-admin (I have not been able to convince it to give me the "Shared Folders" option in the System Administration window - maybe it is somewhere else in the menu, but I cannot find it) and define an NFS export: enter the path /var/www/site, select NFS, enter the IP address of the iMac and save. On the Mac, try to mount the file system using the usual methods - Finder, the command-line "mount" command - nothing works. Tried restarting the Linux box in case there is a daemon that needs restarting - nothing. So I have run out of stuff to do. I have tried searching the documentation - it is pretty basic - and the man page documentation is as opaque as ever. Please, oh please, will someone help me get this @38&@^# thing to work! Thanks for reading this far... PG.

    Read the article

  • User Defined Class as a Template Parameter

    - by isurulucky
    Hi, I'm implementing a custom STL-style map. I need to make sure that any data type (basic or user-defined) will work as the key. I declared the Map class as a template which has two parameters, for the key and the value. My question is: if I need to use a string as the key type, how can I overload the < and > operators for string-type keys only? In template specialization we have to specialize the whole class for the type we need, as I understand it. Is there a better way I can do this? What if I add a separate Key class and use it as the template type for the key? Thank you!
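
    Rather than specializing the whole Map for std::string, the comparison can be made a template parameter with a default, the same pattern std::map uses for its Compare parameter; only key types that need a special ordering then supply a functor. A minimal sketch (the storage is a placeholder, not a real map implementation):

        #include <algorithm>
        #include <cctype>
        #include <functional>
        #include <string>
        #include <vector>

        template <typename Key, typename Value, typename Compare = std::less<Key> >
        class Map {
        public:
            void insert(const Key& k, const Value& v) { items_.push_back(Entry{k, v}); }
            bool keyLess(const Key& a, const Key& b) const { return cmp_(a, b); }
        private:
            struct Entry { Key key; Value value; };
            std::vector<Entry> items_;   // placeholder storage for the sketch
            Compare cmp_;
        };

        // A string-only ordering, supplied only where it is wanted:
        struct CaseInsensitiveLess {
            bool operator()(const std::string& a, const std::string& b) const {
                return std::lexicographical_compare(
                    a.begin(), a.end(), b.begin(), b.end(),
                    [](unsigned char x, unsigned char y) {
                        return std::tolower(x) < std::tolower(y);
                    });
            }
        };

        int main() {
            Map<std::string, int> plain;                         // std::less<std::string>
            Map<std::string, int, CaseInsensitiveLess> folded;   // custom key ordering
            plain.insert("abc", 1);
            folded.insert("ABC", 1);
            return folded.keyLess("Apple", "banana") ? 0 : 1;    // true with case folding
        }

    Keys that already order correctly with their own operator< (ints, std::string under the default ordering) need nothing extra, which answers the "string keys only" part without specializing the whole class.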

    Read the article

  • using NEWSEQUENTIALID() with UPDATE Trigger

    - by Ram
    I am adding a new GUID/uniqueidentifier column to my table: ALTER TABLE table_name ADD VersionNumber UNIQUEIDENTIFIER UNIQUE NOT NULL DEFAULT NEWSEQUENTIALID() GO And whenever a record is updated in the table, I want to update this "VersionNumber" column, so I created a new trigger: CREATE TRIGGER [DBO].[TR_TABLE_NAMWE] ON [DBO].[TABLE_NAME] AFTER UPDATE AS BEGIN UPDATE TABLE_NAME SET VERSIONNUMBER=NEWSEQUENTIALID() FROM TABLE_NAME D JOIN INSERTED I ON D.ID=I.ID /* some ID which is used to join */ END GO But I just realized that NEWSEQUENTIALID() can only be used with CREATE TABLE or ALTER TABLE. I got this error: "The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression." Is there a workaround for this? Edit 1: Changing NEWSEQUENTIALID() to NEWID() in the trigger solves this, but I am indexing this column and using NEWID() would be sub-optimal.

    Read the article

  • How to create a valid schema in a WSDL that restricts to <|<=|>|>=

    - by wsxedc
    This is what I have in the schema section of my WSDL to specify that the field has to be one of the comparison operators: <xsd:simpleType> <xsd:restriction base="xsd:string"> <xsd:pattern value="&lt;|&gt;|&lt;=|&gt;=|="/> </xsd:restriction> </xsd:simpleType> SoapUI complains about this part of the WSDL. I tried setting the value to something with no special characters and the WSDL is valid. I then tried replacing that whole long string with just value="&gt;" and it is valid, but value="&lt;" is not valid, and value=">" is also not valid. My question is: why does the WSDL validation need > to be double escaped? And the main question: how do I provide a valid less-than sign within the pattern value?

    Read the article

  • VB.NET logical expression evaluator

    - by Tim
    I need to test a logical expression held in a string to see if it evaluates to TRUE or FALSE (the string is built dynamically). For example, the resulting string may contain "'dog'<'cat' OR (14 AND 4<6)". There are no variables in the string; it will evaluate logically. It will only contain simple comparison operators (=, <, >, <=, >=), AND, OR, opening and closing brackets, string constants and numbers (converted to correct syntax, && || etc.). I currently achieve this by creating a JScript function and compiling it into a .dll, then referencing the .dll in my VB.NET project: class ExpressionEvaluator { function Evaluate(Expression : String) { return eval(Expression); } } Is there a simpler method using built-in .NET functions or lambda expressions?

    Read the article

  • What is the point of the PIMPL pattern when we can use an interface for the same purpose in C++?

    - by ZijingWu
    I see a lot of source code that uses the PIMPL idiom in C++. I assume its purpose is to hide the private data/types/implementation, so it can break dependencies and thereby reduce compile times and header-include issues. But an interface class in C++ also has this capability; it can likewise be used to hide data/types and implementation. And to let the caller see only the interface when creating an object, we can add a factory method declaration to the interface header. The comparison is: Cost: the interface approach costs less, because you don't even need to repeat the public wrapper function implementation void Bar::doWork() { return m_impl->doWork(); }; you just need to define the signature in the interface. Understandability: the interface technique is well understood by every C++ developer. Performance: the interface approach performs no worse than the PIMPL idiom; both involve an extra memory access, so I assume the performance is the same. Following is the pseudocode to illustrate my question: // Forward declaration can help you avoid including the BarImpl header, and those included by it. class BarImpl; class Bar { public: // public functions void doWork(); private: // You don't need to recompile Bar.cpp after changing the implementation in BarImpl.cpp BarImpl* m_impl; }; The same purpose can be achieved using an interface: // Bar.h class IBar { public: virtual ~IBar(){} // public functions virtual void doWork() = 0; }; // to expose only the interface, instead of the class name, to the caller IBar* createObject(); So what's the point of PIMPL?
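
    One point the cost comparison leaves out is call-site semantics: PIMPL keeps Bar a value type that can live on the stack with no virtual dispatch, while the interface-plus-factory route forces every caller onto a pointer and a virtual call. A condensed sketch of both, folded into a single file only so it compiles as is (the names mirror the pseudocode above):

        #include <iostream>
        #include <memory>

        // --- PIMPL: header side; callers create Bar by value ---
        class Bar {
        public:
            Bar();
            ~Bar();
            void doWork();
        private:
            class BarImpl;                     // defined below (normally in Bar.cpp)
            std::unique_ptr<BarImpl> m_impl;
        };

        // --- PIMPL: the Bar.cpp side, folded into this file for the sketch ---
        class Bar::BarImpl {
        public:
            void doWork() { std::cout << "pimpl work\n"; }
        };
        Bar::Bar() : m_impl(new BarImpl) {}
        Bar::~Bar() = default;                 // defined where BarImpl is complete
        void Bar::doWork() { m_impl->doWork(); }

        // --- Interface + factory: callers can only ever hold a pointer ---
        class IBar {
        public:
            virtual ~IBar() {}
            virtual void doWork() = 0;
        };
        class BarConcrete : public IBar {      // normally hidden in a .cpp
        public:
            void doWork() override { std::cout << "interface work\n"; }
        };
        IBar* createObject() { return new BarConcrete; }

        int main() {
            Bar b;                             // stack object, no virtual dispatch
            b.doWork();

            std::unique_ptr<IBar> p(createObject());   // heap allocation + virtual calls
            p->doWork();
        }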

    Read the article
