Search Results

Search found 5819 results on 233 pages for 'compiler theory'.

Page 2/233 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Integrating a Custom Compiler with the Visual Studio IDE

    - by M.A. Hanin
    Background: I want to create a custom VB compiler, extending the "original" compiler, to handle my custom compile-time attributes.

    Question: after I've created my custom compiler and I have an executable capable of compiling VB code via the standard command-line interface, how do I integrate this compiler with the Visual Studio IDE, such that pressing "compile" or "build" will use my compiler instead of the default one?

    EDIT (correct me if I'm wrong): from the reactions here, I see this question is a bit shocking, so I shall further explain my needs and background. .NET provides us with a great mechanism called attributes. As far as I understand, for attributes to apply their intended behavior to the attributed element (assembly, module, class, method, etc.), they must be reflected upon. So the real trick here is reflecting and applying behavior at the right spot. Let's take serialization as an example: we decorate a class with the Serializable attribute, then pass an instance of the class to the formatter's Serialize method. The formatter reflects upon the instance, checks whether it has the Serializable attribute, and acts accordingly.

    Now, if we examine the Synchronization, Flags, Obsolete and CLSCompliant attributes, the real question is: who reflects upon them? At least in some cases, it has to be the compiler (and/or the IDE). Therefore, it seems that if I wish to create custom attributes that change an element's behavior regardless of any specific consumer, I must extend the compiler to reflect upon them at compilation. Of course, these are not my personal insights: the book "Applied .NET Attributes" provides a complete example of creating a custom attribute and a custom C# compiler that reflects upon that attribute at compilation (the example is used to implement Java-style checked exceptions).

    Read the article

  • IDEA modular problem (jsp)

    - by Jeriho
    I have a project with 2 separate modules (frontend and backend; the first depends on the second). When I try to access backend code from frontend code, things go fine. Things take a turn for the worse when I do the same from a JSP. This is the stack trace for simply accessing a bean with <jsp:useBean id="mybean" class="backend.main.MyBean" scope="request"></jsp:useBean>:

    org.apache.jasper.JasperException: /results.jsp(9,0) The value for the useBean class attribute backend.main.MyBean is invalid.
        org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:40)
        org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:407)
        org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:148)
        org.apache.jasper.compiler.Generator$GenerateVisitor.visit(Generator.java:1220)
        org.apache.jasper.compiler.Node$UseBean.accept(Node.java:1178)
        org.apache.jasper.compiler.Node$Nodes.visit(Node.java:2361)
        org.apache.jasper.compiler.Node$Visitor.visitBody(Node.java:2411)
        org.apache.jasper.compiler.Node$Visitor.visit(Node.java:2417)
        org.apache.jasper.compiler.Node$Root.accept(Node.java:495)
        org.apache.jasper.compiler.Node$Nodes.visit(Node.java:2361)
        org.apache.jasper.compiler.Generator.generate(Generator.java:3416)
        org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:231)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:347)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:327)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:314)
        org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:589)
        org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
        org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
        org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    And this error appears if I try to access a regular class:

    An error occurred at line: 12 in the jsp file: /results.jsp
    backend.main.RegularClass cannot be resolved to a type

    Stacktrace:
        org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:92)
        org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:330)
        org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:439)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:349)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:327)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:314)
        org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:589)
        org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
        org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
        org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    Sorry for so many stack traces.

    Read the article

  • Is Google Closure a true compiler?

    - by James Allardice
    This question is inspired by the debate in the comments on this Stack Overflow question. The Google Closure Compiler documentation states the following (emphasis added): The Closure Compiler is a tool for making JavaScript download and run faster. It is a true compiler for JavaScript. Instead of compiling from a source language to machine code, it compiles from JavaScript to better JavaScript. However, Wikipedia gives the following definition of a "compiler": A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language... A language rewriter is usually a program that translates the form of expressions without a change of language. Based on that, I would say that Google Closure is not a compiler. But the fact that Google explicitly states that it is in fact a "true compiler" makes me wonder if there's more to it. Is Google Closure really a JavaScript compiler?

    Read the article

  • Why does NUnit ignore datapoints when using generics in a theory

    - by The Chairman
    I'm trying to make use of the TheoryAttribute introduced in NUnit 2.5. Everything works fine as long as the arguments are of a concrete type:

        [Datapoint]
        public double[,] Array2X2 = new double[,] { { 1, 0 }, { 0, 1 } };

        [Theory]
        public void TestForArbitraryArray(double[,] array)
        {
            // ...
        }

    It does not work when I use generics:

        [Datapoint]
        public double[,] Array2X2 = new double[,] { { 1, 0 }, { 0, 1 } };

        [Theory]
        public void TestForArbitraryArray<T>(T[,] array)
        {
            // ...
        }

    NUnit gives a warning saying "No arguments were provided". Why is that?

    Read the article

  • Is there a good example of the difference between practice and theory?

    - by a_person
    There have been a lot of posters advising that the best way to retain knowledge is to apply it practically. After ignoring said advice for several years in a futile attempt to accumulate enough theoretical knowledge to be prepared for every possible scenario, a process which led me to assemble a library easily worth ~6K, I finally get it. I would like to share my story in the hope that others will avoid taking the same route I took. I've selected a graphical format (photos with captions, to be exact) as my medium. Help me with your ideas: maybe a fragment of code, or other imagery, that would convey the inherent difference between practice and theory.

    Read the article

  • Is it possible to test a theory?

    - by user363295
    We are a group of students working on a theory in software engineering (describing the theory takes a lot of time, so I'll skip that). Implementing the theory is impossible due to technical limitations, but it can be proven logically on paper. We've been pushed to do testing on it, so it can be proved that way too (although we believe that's not possible!). So, basically: is it possible to test something like this? If it is, what type of testing should we use? I've heard it's possible to hand out a brief about it to some experts and ask for their opinion (not sure if that's true). Is that a testing method? If it is, what is it called, and how exactly is it done?

    Read the article

  • Automatically find compiler options for fastest exe on given machine?

    - by dehmann
    Is there a method to automatically find the best compiler options (on a given machine), which result in the fastest possible executable? Naturally, I use g++ -O3, but there are additional flags that may make the code run faster, e.g. -ffast-math and others, some of which are hardware-dependent. Does anyone know some code I can put in my configure.ac file (GNU autotools), so that the flags will be added to the Makefile automatically by the ./configure command? In addition to automatically determining the best flags, I would be interested in some useful compiler flags that are good to use as a default for most optimized executables.
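
    One low-tech approach, short of wiring it into configure.ac: script the search itself, compiling and timing a representative benchmark under each candidate flag set. A minimal Python sketch, where bench.cpp and the flag list are stand-ins for your own workload and candidates:

        import itertools
        import subprocess
        import time

        BASE = ["-O3"]
        EXTRAS = ["-ffast-math", "-march=native", "-funroll-loops"]

        def measure(flags):
            # compile the benchmark with this flag set, then time one run
            subprocess.run(["g++", *flags, "bench.cpp", "-o", "bench"], check=True)
            start = time.perf_counter()
            subprocess.run(["./bench"], check=True)
            return time.perf_counter() - start

        results = []
        for r in range(len(EXTRAS) + 1):
            for extra in itertools.combinations(EXTRAS, r):
                flags = BASE + list(extra)
                results.append((measure(flags), flags))

        best_time, best_flags = min(results)
        print(f"fastest: {' '.join(best_flags)} ({best_time:.3f}s)")

    The same loop can in principle be hoisted into a configure-time check, though run-to-run noise and caching need care; projects like ATLAS tune themselves this way, and tools like ACOVEA have automated the flag search with genetic algorithms.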

    Read the article

  • Graph theory in python

    - by Dan
    I was wondering how people deal with graph theory in Python. How is a graph stored? Are there libraries for this? For example, how would I input a graph and then find its chromatic polynomial? Or its girth? Or the number of unique spanning trees? What about problems that involve edge weights, like the traveling salesman problem? I don't need all of these answered; I'm just looking for a method or tool set that will help me approach and solve problems like this. Thanks, Dan
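
    To sketch one possible setup (not the only one): networkx is a common choice for storage and standard algorithms, and something like the chromatic polynomial can be computed by hand with deletion-contraction, which is exponential but fine for small graphs. A rough Python sketch:

        import networkx as nx
        import sympy

        def chromatic_polynomial(nodes, edges, k):
            """P(G, k) by deletion-contraction: P(G) = P(G - e) - P(G / e)."""
            if not edges:
                return k ** len(nodes)
            (u, v) = next(iter(edges))
            deleted = edges - {(u, v)}
            contracted = set()          # contract v into u, dropping loops
            for (a, b) in deleted:
                a, b = (u if a == v else a), (u if b == v else b)
                if a != b:
                    contracted.add((min(a, b), max(a, b)))
            return (chromatic_polynomial(nodes, deleted, k)
                    - chromatic_polynomial(nodes - {v}, contracted, k))

        k = sympy.Symbol("k")
        nodes, edges = frozenset({1, 2, 3}), {(1, 2), (1, 3), (2, 3)}
        print(sympy.expand(chromatic_polynomial(nodes, edges, k)))
        # -> k**3 - 3*k**2 + 2*k, i.e. k*(k-1)*(k-2) for a triangle

        G = nx.Graph(edges)                 # the same graph in networkx
        print(nx.minimum_cycle_basis(G))    # shortest member's length is the girth

    networkx also handles weighted edges (minimum spanning trees, shortest paths) and ships approximation routines for TSP-style problems; igraph and graph-tool are faster alternatives for large graphs.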

    Read the article

  • What is the relaxation condition in graph theory

    - by windopal
    Hi, I'm trying to understand the main concepts of graph theory and the algorithms within it. Most algorithms seem to contain a "relaxation condition" and I'm unsure what this is. Could someone explain it to me, please? An example of its use is Dijkstra's algorithm; here is the pseudo-code:

        function Dijkstra(Graph, source):
            for each vertex v in Graph:            // Initializations
                dist[v] := infinity                // Unknown distance from source to v
                previous[v] := undefined           // Previous node in optimal path from source
            dist[source] := 0                      // Distance from source to source
            Q := the set of all nodes in Graph     // All nodes are unoptimized, thus in Q
            while Q is not empty:                  // The main loop
                u := vertex in Q with smallest dist[]
                if dist[u] = infinity:
                    break                          // all remaining vertices are inaccessible from source
                remove u from Q
                for each neighbor v of u:          // where v has not yet been removed from Q
                    alt := dist[u] + dist_between(u, v)
                    if alt < dist[v]:              // Relax (u,v,a)
                        dist[v] := alt
                        previous[v] := u
            return dist[]

    Thanks
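
    For what it's worth, "relaxation" is just the last four lines of the inner loop above: testing whether the path to v through u beats the best distance found so far, and if so lowering ("relaxing") the estimate. As a standalone Python sketch:

        import math

        def relax(u, v, weight, dist, previous):
            """Lower dist[v] if the path through u is an improvement."""
            alt = dist[u] + weight          # candidate distance via u
            if alt < dist[v]:               # strictly better than current estimate?
                dist[v] = alt               # relax the edge (u, v)
                previous[v] = u

        dist = {"a": 0, "b": math.inf}
        previous = {"a": None, "b": None}
        relax("a", "b", 4, dist, previous)  # dist["b"] drops from inf to 4

    Dijkstra, Bellman-Ford and friends differ mainly in the order in which, and how many times, they relax each edge.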

    Read the article

  • Set Theory and .NET

    - by MasterMax1313
    Recently I came across a situation where set theory and set math fit what I was doing to the letter (granted, there was an easier way to accomplish what I needed - i.e. LINQ - but I didn't think of that at the time). However, I didn't know of any generic set libraries. Granted, IEnumerable provides some set operations (Union, etc.), but nothing like intersection or set comparison. Can anyone point out something that fits here? Something that implements set math using a generic type?

    Read the article

  • Is there a theory for "transactional" sequences of failing and no-fail actions?

    - by Ross Bencina
    My question is about writing transaction-like functions that execute sequences of actions, some of which may fail. It is related to the general C++ principle "destructors can't throw," the no-fail property, and maybe also to multi-phase transactions or exception safety. However, I'm thinking about it in language-neutral terms. My concern is with correctly designing error handling in C++ functions that must be reliable. I would like to know what the concepts below are called so that I can learn more about them. I'm sorry that I can't ask the question more directly. Since I don't know this area, I have provided an example to explain my question. The question is at the end. Here goes:

    Consider a sequence of steps or actions executed sequentially, where actions belong to one of two classes: those that always succeed, and those that may fail. In the examples below:

    S stands for an action that always succeeds (called "no-fail" in some settings).
    F stands for an action that may fail (for example, it might fail to allocate memory or do I/O that could fail).

    Consider a sequence of actions (executed sequentially from left to right):

        S->S->S->S

    Since each action in the sequence above succeeds, the whole sequence succeeds. On the other hand, the following sequence may fail because the last action may fail:

        S->S->S->F

    So, claim: a sequence has the no-fail (S) property if and only if all of its actions are no-fail.

    Now, I'm interested in action sequences that form "atomic transactions", with "failure atomicity", i.e. where either the whole sequence completes successfully, or there is no effect; if some action fails, the earlier ones must be rolled back. This requires that any successfully executed actions prior to a failing action can always be rolled back. Consider the sequence below, where the first row is the forward path of the transaction and the second row holds inverse actions (executed from right to left) that can be used to roll back if the final action of the top row fails:

        S->S->S->F
        S<-S<-S

    It seems to me that for a transaction to support failure atomicity, the following invariant must hold.

    Claim: to support failure atomicity (either completion or complete roll-back on failure), all actions preceding the latest failable (F) action on the forward path (marked * in the example below) must have no-fail (S) inverses. The following is an example of a sequence that supports failure atomicity:

                 *
        S->F->F->F
        S<-S<-S

    Further, if we want the transaction to be able to attempt cancellation mid-way through, but still guarantee either full completion or full rollback, then we need the following property.

    Claim: to support failure atomicity and cancellation mid-way through execution, in the face of errors in the inverse (cancellation) path, all actions following the earliest failable (F) inverse on the reverse path (marked *) must be no-fail (S).

        F->F->F->S->S
        S<-S<-F<-F
              *

    I believe that these two conditions guarantee that an abortable/cancelable transaction will never get "stuck". My questions are: what is the study and theory of these properties called? Are my claims correct? And what else is there to know?

    UPDATE 1: Updated terminology: what I previously called "robustness" is called atomicity in the database literature.

    UPDATE 2: Added explicit reference to failure atomicity, which seems to be a thing.
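
    As far as naming goes, the database literature on compensating transactions and sagas covers much of this ground, and in C++ the same discipline shows up as scope guards. A minimal Python sketch of the invariant described above (forward actions paired with no-fail inverses; the step names are invented for illustration):

        class TransactionError(Exception):
            pass

        def run_atomically(steps):
            """steps: (forward, inverse) pairs; each inverse must be no-fail."""
            completed = []
            for forward, inverse in steps:
                try:
                    forward()
                except Exception as exc:
                    for undo in reversed(completed):   # roll back in reverse order
                        undo()                         # relies on the S property
                    raise TransactionError("rolled back") from exc
                completed.append(inverse)

        log = []

        def fail():
            raise IOError("disk full")

        steps = [
            (lambda: log.append("reserve"), lambda: log.append("unreserve")),
            (fail, lambda: None),
        ]
        try:
            run_atomically(steps)
        except TransactionError:
            print(log)   # ['reserve', 'unreserve'] - the first step was undone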

    Read the article

  • What does a JIT compiler do?

    - by mehmet6parmak
    Hi all, I was just watching the Google I/O videos, and they talked about the JIT compiler they included in Android and showed a demo of performance improvements thanks to the JIT compiler. I wondered what exactly a JIT compiler does, and wanted to hear from different people. So, what is the duty of a JIT compiler? Thanks all

    Read the article

  • compiler warning at C++ template base class

    - by eike
    I get a compiler warning that I don't understand in this context when I compile "Child.cpp" from the following code. (Don't wonder: I stripped my class declarations down to the bare minimum, so the content will not make much sense, but you will see the problem quicker.) I get the warning with VS2003 and VS2008 on the highest warning level.

    The code

    AbstractClass.h:

        #include <iostream>

        template<typename T>
        class AbstractClass
        {
        public:
            virtual void Cancel(); // { std::cout << "Abstract Cancel" << std::endl; };
            virtual void Process() = 0;
        };

        // Outside definition. If I comment this out and use the inline
        // definition above (currently commented out), I don't get
        // a compiler warning.
        template<typename T>
        void AbstractClass<T>::Cancel()
        {
            std::cout << "Abstract Cancel" << std::endl;
        }

    Child.h:

        #include "AbstractClass.h"

        class Child : public AbstractClass<int>
        {
        public:
            virtual void Process();
        };

    Child.cpp:

        #include "Child.h"
        #include <iostream>

        void Child::Process()
        {
            std::cout << "Process" << std::endl;
        }

    The warning

    The class "Child" is derived from "AbstractClass". In "AbstractClass" there's the public method "AbstractClass::Cancel()". If I define the method outside of the class body (as in the code you see), I get the compiler warning...

        AbstractClass.h(7) : warning C4505: 'AbstractClass<T>::Cancel' : unreferenced local function has been removed with [T=int]

    ...when I compile "Child.cpp". I do not understand this, because this is a public function and the compiler can't know whether I will reference this method later or not. And, in the end, I do reference this method, because I call it in main.cpp; despite the compiler warning, it works if I compile and link all files and execute the program:

        // main.cpp
        #include <iostream>
        #include "Child.h"

        int main()
        {
            Child child;
            child.Cancel(); // works, despite the warning
        }

    If I define the Cancel() function inline (you see it as commented-out code in AbstractClass.h), I don't get the compiler warning. Of course my program works, but I want to understand this warning. Or is it just a compiler mistake? Furthermore, if I do not implement AbstractClass as a template class (just for test purposes in this case), I also don't get the compiler warning...?

    Read the article

  • Can someone provide a short code example of compiler bootstrapping?

    - by Jatin
    This Turing Award lecture by Ken Thompson, "Reflections on Trusting Trust", gives good insight into how the C compiler was made in C itself. Though I understand the crux, it still hasn't sunk in. So ultimately, once the compiler is written to do lexical analysis, build parse trees, do syntax analysis, generate byte code, etc., is a separate machine-code program then written again to do all of that for the compiler itself? Can anyone please explain the procedure with a small example? Bootstrapping on wiki gives good insights, but only a rough view of it. PS: I am aware of the duplicates on the site, but found them to be overviews of what I am already aware of.
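
    A compact way to see the procedure is the three-stage bootstrap used by self-hosting compilers such as GCC. The sketch below uses made-up command names ("oldcc" for any pre-existing compiler, "cc.src" for the new compiler's source, written in the language it compiles); only the shape matters:

        import filecmp
        import subprocess

        def build(compiler, source, output):
            subprocess.run([compiler, source, "-o", output], check=True)

        build("oldcc", "cc.src", "stage1")      # old compiler builds the new one
        build("./stage1", "cc.src", "stage2")   # the new compiler builds itself
        build("./stage2", "cc.src", "stage3")   # ...and the result rebuilds itself

        # stage2 and stage3 come from identical source compiled by functionally
        # identical compilers, so a bit-for-bit match is the usual sanity check
        assert filecmp.cmp("stage2", "stage3", shallow=False), "bootstrap broken"

    The chicken-and-egg start (where does "oldcc" come from for the very first compiler?) is resolved historically: the first compiler for a language is written in another language, or translated by hand, and only later rewritten in itself.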

    Read the article

  • Is there any self-improving compiler around?

    - by JohnIdol
    I am not aware of any self-improving compiler, but then again I am not much of a compiler guy. Is there ANY self-improving compiler out there? Please note that I am talking about a compiler that improves itself - not a compiler that improves the code it compiles. Any pointers appreciated! Side note: in case you're wondering why I am asking, have a look at this post. Even if I agree with most of the arguments, I am not too sure about the following: "We have programs that can improve their code without human input now — they're called compilers." ... hence my question.

    Read the article

  • A compiler for automata theory

    - by saadtaame
    I'm designing a programming language for automata theory. My goal is to allow programmers to use machines (DFAs, NFAs, etc.) as units in expressions. I'm unsure whether the language should be compiled, interpreted, or JIT-compiled. My intuition is that compilation is a good choice, since some operations might take too much time (converting NFAs to equivalent DFAs can be expensive). Translating to x86 seems good. There is one issue, however: I want the user to be able to plot machines. Any ideas?
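
    Whichever execution strategy is chosen, the expensive machine operations are ordinary graph algorithms that a runtime library can host. For instance, the NFA-to-DFA conversion mentioned above is the subset construction; a rough Python model (transition tables as dicts, epsilon moves keyed on None):

        def epsilon_closure(nfa, states):
            stack, closure = list(states), set(states)
            while stack:
                s = stack.pop()
                for t in nfa.get((s, None), set()) - closure:
                    closure.add(t)
                    stack.append(t)
            return frozenset(closure)

        def subset_construction(nfa, start, alphabet):
            start_state = epsilon_closure(nfa, {start})
            transitions, seen, todo = {}, {start_state}, [start_state]
            while todo:
                current = todo.pop()
                for symbol in alphabet:
                    move = set()
                    for s in current:            # union of moves on this symbol
                        move |= nfa.get((s, symbol), set())
                    target = epsilon_closure(nfa, move)
                    transitions[(current, symbol)] = target
                    if target not in seen:
                        seen.add(target)
                        todo.append(target)
            return start_state, transitions

        # NFA for a*b: state 0 loops on 'a', reaches state 1 on 'b'
        nfa = {(0, "a"): {0}, (0, "b"): {1}}
        start, dfa = subset_construction(nfa, 0, "ab")
        print(start, dfa)

    Plotting is largely orthogonal to the compile/interpret choice: emitting Graphviz dot text from the same transition tables is a common route.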

    Read the article

  • Why would the VB.NET compiler think an interface isn't implemented when it is?

    - by Dan Tao
    I have this happen sometimes, particularly with the INotifyPropertyChanged interface in my experience, but I have no idea if the problem is limited to that single interface (which would seem bizarre) or not. Let's say I have some code set up like this. There's an interface with a single event. A class implements that interface. It includes the event.

        Public Interface INotifyPropertyChanged
            Event PropertyChanged As PropertyChangedEventHandler
        End Interface

        Public Class Person
            Implements INotifyPropertyChanged

            Public Event PropertyChanged _
                (ByVal sender As Object, ByVal e As PropertyChangedEventArgs) _
                Implements INotifyPropertyChanged.PropertyChanged

            ' more code below '
        End Class

    Every now and then, when I build my project, the compiler will suddenly start acting like the above code is broken. It will report that the Person class does not implement INotifyPropertyChanged because it doesn't have a PropertyChanged event; or it will say the PropertyChanged event can't implement INotifyPropertyChanged.PropertyChanged because their signatures don't match. This is weird enough as it is, but here's the weirdest part: if I just cut out the line starting with Event PropertyChanged and then paste it back in, the error goes away. The project builds. Does anybody have any clue what could be going on here?

    Read the article

  • Color Theory: How to convert Munsell HVC to RGB/HSB/HSL

    - by Ian Boyd
    I'm looking at a document that describes the standard colors used in dentistry to describe the color of a tooth. They quote hue, value, and chroma values, and indicate they are from the 1905 Munsell description of color: The system of colour notation developed by A. H. Munsell in 1905 identifies colour in terms of three attributes: HUE, VALUE (Brightness) and CHROMA (saturation) [15] HUE (H): Munsell defined hue as the quality by which we distinguish one colour from another. He selected five principle colours: red, yellow, green, blue, and purple; and five intermediate colours: yellow-red, green-yellow, blue-green, purple-blue, and red-purple. These were placed around a colour circle at equal points and the colours in between these points are a mixture of the two, in favour of the nearer point/colour (see Fig 1.). VALUE (V): This notation indicates the lightness or darkness of a colour in relation to a neutral grey scale, which extends from absolute black (value symbol 0) to absolute white (value symbol 10). This is essentially how ‘bright’ the colour is. CHROMA (C): This indicates the degree of divergence of a given hue from a neutral grey of the same value. The scale of chroma extends from 0 for a neutral grey to 10, 12, 14 or farther, depending upon the strength (saturation) of the sample to be evaluated. There are various systems for categorising colour, the Vita system is most commonly used in Dentistry. This uses the letters A, B, C and D to notate the hue (colour) of the tooth. The chroma and value are both indicated by a value from 1 to 4. A1 being lighter than A4, but A4 being more saturated than A1. If placed in order of value, i.e. brightness, the order from brightest to darkest would be: A1, B1, B2, A2, A3, D2, C1, B3, D3, D4, A3.5, B4, C2, A4, C3, C4. The exact values of Hue, Value and Chroma for each of the shades are shown below (16). So my question is, can anyone convert Munsell HVC into RGB, HSB or HSL?

        Hue    Value (Brightness)    Chroma (Saturation)
        ===    ==================    ===================
        4.5    7.80                  1.7
        2.4    7.45                  2.6
        1.3    7.40                  2.9
        1.6    7.05                  3.2
        1.6    6.70                  3.1
        5.1    7.75                  1.6
        4.3    7.50                  2.2
        2.3    7.25                  3.2
        2.4    7.00                  3.2
        4.3    7.30                  1.6
        2.8    6.90                  2.3
        2.6    6.70                  2.3
        1.6    6.30                  2.9
        3.0    7.35                  1.8
        1.8    7.10                  2.3
        3.7    7.05                  2.4

    They say that Value (Brightness) varies from 0..10, which is fine, so I take 7.05 to mean 70.5%. But what is Hue measured in? I'm used to hue being measured in degrees (0..360), but the values I see would all be red - when they should be more yellow, or brown. Finally, it says that Chroma/Saturation can range from 0..10 ...or even higher - which makes it sound like an arbitrary scale. So can anyone convert Munsell HVC to HSB or HSL, or better yet, RGB?
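
    Two partial answers, hedged. First, Munsell hue is not an angle: it's a step from 0 to 10 within a named hue family (5R, 7.5YR, and so on), so bare numbers like 4.5 are incomplete without the family letter; for teeth the source presumably intends a yellow-red page such as YR, but that needs confirming against the paper. Second, there is no closed-form Munsell-to-RGB formula; conversions interpolate the Munsell renotation data. The colour-science Python package implements that lookup; a sketch under the YR assumption:

        # pip install colour-science; the "YR" hue family below is an assumption
        import colour
        import numpy as np

        def munsell_to_srgb(hue, value, chroma, family="YR"):
            notation = f"{hue}{family} {value}/{chroma}"    # e.g. "4.5YR 7.8/1.7"
            xyY = colour.munsell_colour_to_xyY(notation)    # renotation lookup
            XYZ = colour.xyY_to_XYZ(xyY)
            # the renotation data is relative to illuminant C; adapt to sRGB's D65
            C = colour.CCS_ILLUMINANTS["CIE 1931 2 Degree Standard Observer"]["C"]
            return np.clip(colour.XYZ_to_sRGB(XYZ, illuminant=C), 0, 1) * 255

        print(munsell_to_srgb(4.5, 7.80, 1.7))  # first row of the table above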

    Read the article

  • Theory of formal languages - Automaton

    - by dader51
    Hi everybody! I'm wondering about formal languages. I have a kind of parser: it reads an xml-like serialized tree structure and turns it into a multidimensional array. I figured out that I need at least three variables to achieve the job:

        $tree = array();          // a new array
        $pTree = array(&$tree);   // a new array whose first element points to $tree
        $deep = 0;

    plus the one containing the sentence split into words. My point is about the similarities between the algorithm being used and the different kinds of automata (state machines, Turing machines, stack machines, ...). The $words variable is the "tape" of the automaton, the tests/conditions of the algorithm are transitions, $deep is the state and $tree is the output. I cannot figure out what $pTree is. So the question is: which automaton am I implicitly using here, and into which family of formal languages does it fit? And what about recursion?
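
    For illustration, here is the same kind of parse written with the stack made explicit (Python, with a made-up token format). $pTree appears to be playing exactly this stack's role - holding the path of open subtrees - which is what makes the device a pushdown automaton, the class of machine that recognizes context-free languages such as nested tag structures; recursion is just another way of obtaining that stack, via the call stack:

        def parse(tokens):
            tree, stack = [], []
            current = tree
            for tok in tokens:
                if tok.startswith("</"):
                    current = stack.pop()        # close tag: back to the parent
                elif tok.startswith("<"):
                    child = []                   # open tag: descend into a subtree
                    current.append((tok[1:-1], child))
                    stack.append(current)
                    current = child
                else:
                    current.append(tok)          # leaf text
            return tree

        print(parse(["<a>", "<b>", "hi", "</b>", "ho", "</a>"]))
        # -> [('a', [('b', ['hi']), 'ho'])]

    Note that len(stack) is the $deep counter: a depth counter alone (finite state) cannot rebuild the tree; the stack contents are what carry the nesting.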

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words in use in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] sentence = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String was encoded as an Integer. The Encoding Scheme here simply assigned an int value based on generalised statistical ranking of word usage, and assigned a constant to the question mark. Ultimately, a Word has both a Spelling and a Meaning, though. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
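
    As a toy illustration of the rank-based variant described above (the corpus here is a stand-in for wordcount.org-style statistics):

        from collections import Counter

        corpus = "how are you today how are you".split()
        ranks = [w for w, _ in Counter(corpus).most_common()]
        code = {w: i for i, w in enumerate(ranks, start=1)}
        QUERY = 0                       # reserved code for the question mark

        def encode(sentence):
            words = sentence.lower().rstrip("?").split()
            ids = [code[w] for w in words]
            return ids + [QUERY] if sentence.endswith("?") else ids

        print(encode("How are you today?"))   # [1, 2, 3, 4, 0]

    The hard part such a scheme leaves open is exactly the Spelling-versus-Meaning split the question raises: an interlingua needs one code per sense, not per spelling.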

    Read the article

  • Discrete problem of probability theory [closed]

    - by calejero
    A jury consists of 12 persons, each of whom has, before the trial starts, a probability of 0.4 of voting in favor of the defendant's innocence. During the trial, the lawyer has a probability of 0.6 of changing the mind of each juror who was biased against the accused. How likely is the defendant to be acquitted if he needs 10 votes in his favor?
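
    On one natural reading (each juror independently ends up voting for innocence with probability 0.4 + 0.6 × 0.6 = 0.76: either already favorable, or biased but converted by the lawyer), the acquittal probability is a binomial tail, P(X >= 10) with X ~ Bin(12, 0.76):

        from math import comb

        p = 0.4 + 0.6 * 0.6        # a juror ends up voting innocent: 0.76
        n, need = 12, 10
        acquit = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))
        print(acquit)              # ~ 0.422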

    Read the article
