Search Results

Search found 17686 results on 708 pages for 'high level'.

Page 572 of 708

  • Is encodeURIComponent really useful?

    - by Marco Demaio
    Something I still don't understand when performing an HTTP GET request to the server: what is the advantage of using the JS function encodeURIComponent to encode each component of the query string? Doing some tests I saw that the server (using PHP) gets the values of the GET request properly even if I don't use encodeURIComponent. Obviously I still need to encode, on the client side, the special characters & ? = / : because otherwise a value like "peace&love=virtue" would be treated as a new key/value pair of the GET request instead of as one single value. But why does encodeURIComponent also encode many other characters, such as 'è', which is translated into %C3%A8 and must be decoded on a PHP server using the utf8_decode function? By using encodeURIComponent all values of the GET request are UTF-8 encoded, therefore when reading them in PHP I have to call utf8_decode on each $_GET value, which is quite annoying. Why can't we just encode only the & ? = / : characters? See also: http://stackoverflow.com/questions/2607946/js-encodeuricomponent-result-different-from-the-one-created-by-form which shows that encodeURIComponent does not even encode consistently, because a simple browser FORM GET encodes characters like '€' in a different way. So I still wonder: what is this encodeURIComponent for?
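
    In plain browser script the difference is easy to see. A short TypeScript/JavaScript sketch of what encodeURIComponent does to the two kinds of characters discussed above (an illustration only; no PHP involved, and the endpoint name is made up):

        // Reserved characters would otherwise start a new key/value pair:
        const value = "peace&love=virtue";
        console.log(encodeURIComponent(value));   // "peace%26love%3Dvirtue"

        // Non-ASCII characters are percent-encoded as their UTF-8 bytes,
        // which is why PHP receives "%C3%A8" for 'è':
        console.log(encodeURIComponent("è"));      // "%C3%A8"

        // Building a URL by hand, the encoded form keeps the value intact:
        const url = "search.php?q=" + encodeURIComponent(value);   // hypothetical endpoint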

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and help on a classification problem; if anyone has any references I can read that would help me solve it even better, please share them. I have a classification problem with four discrete and very well separated classes. However my input is continuous and arrives at a high frequency (50 Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 is the neutral/resting "do nothing" class. This class is the rejected class. The problem is that when I move from one class to another I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before reaching class 1. Ideally, I want my decision boundary to look like the one in the picture below, where the rejected class (Class 5) has a higher decision threshold than the other classes to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab using optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic or a loss function, and should I include the resting cluster in the training?
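
    One common way to implement an abstain option, independent of the classifier used, is to accept the winning class only when its posterior (or score) clears a confidence threshold and otherwise fall back to the resting class. A minimal sketch in TypeScript, with the threshold value and class labels as illustrative assumptions rather than tuned numbers:

        const REST_CLASS = 5;     // assumed label of the neutral/resting class
        const THRESHOLD = 0.8;    // assumed confidence needed to leave class 5

        function classifyWithReject(posteriors: Map<number, number>): number {
          let bestClass = REST_CLASS;
          let bestProb = 0;
          for (const [label, p] of posteriors) {
            if (p > bestProb) {
              bestProb = p;
              bestClass = label;
            }
          }
          // Abstain (stay in the resting class) when no class is confident
          // enough, which suppresses false positives during transitions.
          return bestProb >= THRESHOLD ? bestClass : REST_CLASS;
        }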

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using auto layout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to have been updated for auto layout. Edited to clarify: the problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped to obtain was that the document view's frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and the various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • Wanted to know in detail how shared libraries work vis-à-vis static libraries

    - by goldenmean
    Hello, I am working on creating and linking a shared library (.so). While working with shared libraries, many questions popped up for which I could not find satisfying answers when I searched, hence putting them here. My questions about shared libraries are: 1) How is a shared library different from a static library? What are the key differences in the way they are created and executed? 2) In the case of a shared library, at what point are the addresses assigned from which a particular function in the library will be loaded and run? Who assigns those load/run addresses to the functions? 3) Will an application linked against a shared library be slower in execution than one linked against a static library? 4) Will the application executable size differ in the two cases? 5) Can one do source-level debugging by stepping into functions defined inside a shared library? Is anything extra needed to make those functions visible to the application? 6) What are the pros and cons of using either kind of library? Thanks. -AD

    Read the article

  • Where are the real risks in network security?

    - by Barry Brown
    Anytime a username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc). But that leaves the end points potentially vulnerable. Realistically, which is at greater risk of intrusion? Transport layer: compromised via wireless packet sniffing, malicious wiretapping, etc. Transport devices: risks include ISPs and Internet backbone operators sniffing data. End-user device: vulnerable to spyware, key loggers, shoulder surfing, and so forth. Remote server: many uncontrollable vulnerabilities including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more. My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Has it been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks. What do you think?

    Read the article

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    Hi, I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated Windows authentication which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (i.e. an end user can't read the credit card table). There are advantages in that even if someone compromises the webserver, they won't be able to glean any additional information from the database. Also, the webserver isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions). So, now I'm looking at the new web app which will be on the public internet. One suggestion is to have an Active Directory server and create Windows accounts on the AD for each user of the site. These users will then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP.NET already provides the AD membership provider and role provider, so this should be fairly simple to implement. There are a number of questions around this (scalability, reliability, etc.) and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons why to do it / not to do it. Any input appreciated. Regards, Basiclife

    Read the article

  • Best way for the launching HTML/JSP page to communicate with a GWT module

    - by h2g2java
    I asked this at the GWT forum but I'm impatient for the answer and I seem to get rather good responses here. An HTML or JSP file is used to launch the xxx.nocache.js, which then decides which browser "permutation" to use. <head> <meta http-equiv="content-type" content="text/html;charset=UTF-8"> <title>xxx</title> <script type="text/javascript" language="javascript" src="xxx.nocache.js"></script> </head> In my case, I am using a JSP. When the JSP is executed, it discovers some conditions. I wish to pass these conditions as values to the GWT module being launched. The "elegant" GWT way to pass these values would be to persist them as request/memcache attributes and then have the GWT module perform RPC to retrieve those values. For example, the JSP discovers that the current user is Whoopy. Shouldn't I simply have the JSP generate JavaScript to store user = "Whoopy" as a top-level or named-frame-level JavaScript variable, and use JSNI within the module to retrieve the value for user? I have not tried it yet, but I would like to know how anyone might have done it without having to use RPC.
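
    A minimal sketch of that page-level variable approach; the variable name user and its value come from the poster's example, and the JSNI line is shown only as a comment because it is Java syntax:

        // The JSP would emit a small script before xxx.nocache.js, e.g.:
        //   <script type="text/javascript">
        //     var user = "Whoopy";   // condition discovered server-side
        //   </script>
        //
        // Inside the GWT module a JSNI method can then read it without RPC:
        //   public static native String getUser() /*-{ return $wnd.user; }-*/;
        //
        // The equivalent check in plain browser script, for illustration:
        const user = (window as any).user as string | undefined;
        if (user !== undefined) {
          console.log("Launched for user:", user);
        }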

    Read the article

  • How to get the real bounds with Google Maps when fully zoomed out

    - by brad
    I have a map that shows location points based on the bounds of the map. For example, any time the map is moved or zoomed, I find the bounds and query for locations that fall within those bounds. Unfortunately I'm unable to display all my locations when fully zoomed out. The reason is that Google Maps reports the min/max longitude as whatever is at the edge of the map, but if you zoom out enough, you can get a longitudinal range that excludes visible locations. For instance, if you zoom your map so that you see North America twice, on the far left and far right, the min/max longitudes are around -36.5625 to 170.15625. But this almost completely excludes North America, which lies in the -180 to -60 range. Obviously this is bothersome: you can actually see the continent of North America (twice), but when I query my locations with the range from Google Maps, North America isn't returned. My code for finding the min/max values is: bounds = gmap.getBounds(); min_lat = bounds.getSouthWest().lat() max_lat = bounds.getNorthEast().lat() Has anyone encountered this, and can anyone suggest a workaround? Off the top of my head I can only think of a hack: to check the zoom level and hardcode the min/max longitudes to -180/180 if necessary, which is definitely unacceptable.
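
    One way to express that hack more safely is to detect when the viewport is at least as wide as one full copy of the world, and only then widen the longitude range. A hedged sketch: the 256-pixel base tile width is the standard Google Maps world width at zoom 0, while mapWidthPx and the function name are illustrative assumptions:

        const WORLD_WIDTH_AT_ZOOM_0 = 256; // world tile width in pixels at zoom 0

        function effectiveLngRange(swLng: number, neLng: number, zoom: number,
                                   mapWidthPx: number): [number, number] {
          const worldWidthPx = WORLD_WIDTH_AT_ZOOM_0 * Math.pow(2, zoom);
          if (mapWidthPx >= worldWidthPx) {
            // The map shows at least one full copy of the world, so every
            // longitude is visible regardless of the reported bounds.
            return [-180, 180];
          }
          // Otherwise keep the reported bounds, remembering they may wrap the
          // antimeridian (sw > ne), which the query must handle as two ranges.
          return [swLng, neLng];
        }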

    Read the article

  • Compile and optimize for different target architectures

    - by Peter Smit
    Summary: I want to take advantage of compiler optimizations and processor instruction sets, but still have a portable application (running on different processors). Normally I could indeed compile 5 times and let the user choose the right one to run. My question is: how can I automate this, so that the processor is detected at runtime and the right executable is executed without the user having to choose it? I have an application with a lot of low-level math calculations. These calculations will typically run for a long time. I would like to take advantage of as much optimization as possible, preferably also of (not always supported) instruction sets. On the other hand I would like my application to be portable and easy to use (so I would not like to compile 5 different versions and let the user choose). Is there a possibility to compile 5 different versions of my code and dynamically run the most optimized version possible at execution time? With 5 different versions I mean versions with different instruction sets and different optimizations for processors. I don't care about the size of the application. At this moment I'm using gcc on Linux (my code is in C++), but I'm also interested in this for the Intel compiler and for the MinGW compiler for compilation to Windows. The executable doesn't have to be able to run on different OSes, but ideally there would be something possible with automatically selecting 32-bit and 64-bit as well. Edit: please give clear pointers on how to do it, preferably with small code examples or links to explanations. From my point of view I need a super generic solution, which is applicable to any random C++ project I have later. Edit: I assigned the bounty to ShuggyCoUk; he had a great number of pointers to look out for. I would have liked to split it between multiple answers but that is not possible. I haven't implemented this yet, so the question is still 'open'! Please, still add and/or improve answers, even though there is no bounty to be given anymore. Thanks everybody!

    Read the article

  • Disabling normal link behaviour on click with an if statement in jQuery?

    - by Qwibble
    I've been banging my head against this all day, trying anything and everything I can think of to no avail. I'm no jQuery guru, so I'm hoping someone here can help me out. What I'm trying to do in pseudocode (concept) seems practical and easy to implement, but obviously not so =D What I have is a navigation sidebar, with sub-menus within some li's, and a link at the head of each li. When a top-level link is clicked, I want the jQuery to check if the link has a sibling ul. If it does, the link doesn't act like a link, but slides down the sibling sub-nav. If there isn't one, the link should act normally. Here's where I'm at currently; what am I doing wrong?

        $('ul.navigation li a').not('ul.navigation li ul li a').click(function (){
            if($(this).parent().contains('ul')){
                $('ul.navigation li ul').slideUp();
                $(this).siblings('ul').slideDown();
                return false;
            }
            else{
                alert('Link Clicked') // Maybe do some more stuff in here...
            }
        });

    Any ideas?
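
    For reference, a hedged sketch of the sibling check: jQuery objects have no .contains() method, so the usual idiom is to test whether .siblings('ul') matched anything via its .length (an illustration, not necessarily the fix for the poster's exact markup):

        declare const $: any;   // jQuery, typed loosely for brevity

        $('ul.navigation > li > a').click(function (this: HTMLElement, e: Event) {
            const $submenu = $(this).siblings('ul');
            if ($submenu.length) {            // this link heads a sub-menu
                e.preventDefault();           // suppress normal link behaviour
                $('ul.navigation li ul').slideUp();
                $submenu.slideDown();
            }
            // otherwise fall through and let the link act normally
        });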

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using: Mutation: single swap mutation or inverted subsequence mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%. Selection: roulette wheel selection. Fitness function: 1 / distance of tour. Population size: tested 100, 200, 500; I also run the GA 5 times so that I have a variety of starting populations. Stop condition: 2500 generations. With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance, and I would expect to get better results using a GA. Any ideas as to why my GA performs worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
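
    For readers unfamiliar with the operator mentioned above, here is a minimal sketch of Ordered Crossover (OX1) on permutation-encoded tours; it illustrates the operator itself and is not the poster's Matlab implementation:

        // Parents are permutations of city indices; the child is always a
        // valid tour because duplicates from parentB are skipped.
        function orderedCrossover(parentA: number[], parentB: number[]): number[] {
          const n = parentA.length;
          const child: number[] = new Array(n).fill(-1);

          // Copy a random slice of parentA into the child at the same positions.
          let start = Math.floor(Math.random() * n);
          let end = Math.floor(Math.random() * n);
          if (start > end) [start, end] = [end, start];
          for (let i = start; i <= end; i++) child[i] = parentA[i];

          // Fill the remaining positions with parentB's cities, in the order
          // they appear after the slice, skipping cities already present.
          const used = new Set(child.slice(start, end + 1));
          let insert = (end + 1) % n;
          for (let k = 0; k < n; k++) {
            const city = parentB[(end + 1 + k) % n];
            if (!used.has(city)) {
              child[insert] = city;
              used.add(city);
              insert = (insert + 1) % n;
            }
          }
          return child;
        }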

    Read the article

  • Window message procedures in Linux vs Windows

    - by mizipzor
    In Windows when you create a window, you must define a (C++) LRESULT CALLBACK message_proc(HWND Handle, UINT Message, WPARAM WParam, LPARAM LParam); to handle all the messages sent from the OS to the window, like keypresses and such. I'm looking to do some reading on how the same system works in Linux. Maybe it is because I fall a bit short on the terminology, but I fail to find anything on this through Google (although I'm sure there must be plenty!). Is it still just one single C function that handles all the communication? Does the function definition differ between different WMs (GNOME, KDE), or is it handled at a lower level in the OS? Edit: I've looked into toolkits like Qt and wxWidgets, but those frameworks seem to be geared more towards developing GUI-intensive applications. I'm rather looking for a way to create a basic window (restrict resize, borders/decorations) for my OpenGL graphics and retrieve input on more than one platform. And according to my initial research, this kind of function is the only way to retrieve that input. What would be the best route? Reading up, learning and then using Qt or wxWidgets? Or learning how the systems work and implementing those few basic features I want myself?

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below: The following query executes quickly (1 second): SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID but if we add a filter to the query, then it takes approximately 2 minutes to return: SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID WHERE SA.CHG_DATE'19 Feb 2010' Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) For the FulltextMatch table valued function where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join) and 2) For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows) hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates being used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

    Read the article

  • Can a class inherit from LambdaExpression in .NET? Or is this not recommended?

    - by d.
    Consider the following code (C# 4.0): public class Foo : LambdaExpression { } This throws the following design-time error: Foo does not implement inherited abstract member System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) There's absolutely no problem with public class Foo : Expression { } but, out of curiosity and for the sake of learning, I've searched in Google System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) and guess what: zero results returned (when was the last time you saw that?). Needless to say, I haven't found any documentation on this method anywhere else. As I said, one can easily inherit from Expression; on the other hand LambdaExpression, while not marked as sealed (Expression<TDelegate> inherits from it), seems to be designed to prevent inheriting from it. Is this actually the case? Does anyone out there know what this method is about? EDIT (1): More info based on the first answers - If you try to implement Accept, the editor (C# 2010 Express) automatically gives you the following stub: protected override Expression Accept(System.Linq.Expressions.ExpressionVisitor visitor) { return base.Accept(visitor); } But you still get the same error. If you try to use a parameter of type StackSpiller directly, the compiler throws a different error: System.Linq.Expressions.Compiler.StackSpiller is inaccessible due to its protection level. EDIT (2): Based on other answers, inheriting from LambdaExpression is not possible so the question as to whether or not it is recommended becomes irrelevant. I wonder if, in cases like this, the error message should be Foo cannot implement inherited abstract member System.Linq.Expressions.LambdaExpression.Accept(System.Linq.Expressions.Compiler.StackSpiller) because [reasons go here]; the current error message (as some answers prove) seems to tell me that all I need to do is implement Accept (which I can't do).

    Read the article

  • How do I process the configure script when cross-compiling with MinGW?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure and then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work because the configure script found include files that were not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_* fi

    which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?

    Read the article

  • File transfer eating a lot of CPU

    - by Dan C.
    I'm trying to transfer a file over an IHttpHandler; the code is pretty simple. However when I start a single transfer it uses about 20% of the CPU. If I were to scale this to 20 simultaneous transfers, the CPU usage would be very high. Is there a better way I can do this to keep the CPU lower? The client code just sends over chunks of the file, 64 KB at a time.

        public void ProcessRequest(HttpContext context)
        {
            if (context.Request.Params["secretKey"] == null)
            {
            }
            else
            {
                accessCode = context.Request.Params["secretKey"].ToString();
            }
            if (accessCode == "test")
            {
                string fileName = context.Request.Params["fileName"].ToString();
                byte[] buffer = Convert.FromBase64String(context.Request.Form["data"]);
                string fileGuid = context.Request.Params["smGuid"].ToString();
                string user = context.Request.Params["user"].ToString();
                SaveFile(fileName, buffer, user);
            }
        }

        public void SaveFile(string fileName, byte[] buffer, string user)
        {
            string DirPath = @"E:\Filestorage\" + user + @"\";
            if (!Directory.Exists(DirPath))
            {
                Directory.CreateDirectory(DirPath);
            }
            string FilePath = @"E:\Filestorage\" + user + @"\" + fileName;
            FileStream writer = new FileStream(FilePath, File.Exists(FilePath) ? FileMode.Append : FileMode.Create, FileAccess.Write, FileShare.ReadWrite);
            writer.Write(buffer, 0, buffer.Length);
            writer.Close();
        }

    Read the article

  • GridView's NewValues and OldValues empty in the OnRowUpdating event.

    - by Abe Miessler
    I have the GridView below. I am binding to a custom datasource in the code behind. It gets into the "OnRowUpdating" event just fine, but there are no NewValues or OldValues. Any suggestions as to how I can get these values? <asp:GridView ID="gv_Personnel" runat="server" OnRowDataBound="gv_Personnel_DataBind" OnRowCancelingEdit="gv_Personnel_CancelEdit" OnRowEditing="gv_Personnel_EditRow" OnRowUpdating="gv_Personnel_UpdateRow" AutoGenerateColumns="false" ShowFooter="true" DataKeyNames="BudgetLineID" AutoGenerateEditButton="true" AutoGenerateDeleteButton="true" > <Columns> <asp:BoundField HeaderText="Level of Staff" DataField="LineDescription" /> <%--<asp:BoundField HeaderText="Hrs/Units requested" DataField="NumberOfUnits" />--%> <asp:TemplateField HeaderText="Hrs/Units requested"> <ItemTemplate> <%# Eval("NumberOfUnits")%> </ItemTemplate> <EditItemTemplate> <asp:TextBox ID="tb_NumUnits" runat="server" Text='<%# Bind("NumberOfUnits")%>' /> </EditItemTemplate> </asp:TemplateField> <asp:BoundField HeaderText="Hrs/Units of Applicant Cost Share" DataField="" NullDisplayText="0" /> <asp:BoundField HeaderText="Hrs/Units of Partner Cost Share" DataField="" NullDisplayText="0" /> <asp:BoundField FooterStyle-Font-Bold="true" FooterText="TOTAL PERSONNEL SERVICES:" HeaderText="Rate" DataFormatString="{0:C}" DataField="UnitPrice" /> <asp:TemplateField HeaderText="Amount Requested" ItemStyle-HorizontalAlign="Right" FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true"/> <asp:TemplateField HeaderText="Applicant Cost Share" ItemStyle-HorizontalAlign="Right" FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true"/> <asp:TemplateField HeaderText="Partner Cost Share" ItemStyle-HorizontalAlign="Right" FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true"/> <asp:TemplateField HeaderText="Total Projet Cost" ItemStyle-HorizontalAlign="Right" FooterStyle-HorizontalAlign="Right" FooterStyle-BorderWidth="2" FooterStyle-Font-Bold="true"/> </Columns> </asp:GridView>

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database, PostgreSQL, MySQL, SQL Server, or Oracle. Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 encryption functions for inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password. Then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
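
    A minimal sketch of the single-column scheme being described, written with Node's crypto module in TypeScript rather than the poster's Java 1.6 stack; the "salt$hash" layout and the SHA-256 choice are illustrative assumptions (in practice a deliberately slow KDF such as PBKDF2 or bcrypt is preferred). The key point is that the salt is stored in the clear next to the hash, so verification simply repeats the same computation:

        import { randomBytes, createHash, timingSafeEqual } from "node:crypto";

        function hashPassword(password: string): string {
          const salt = randomBytes(16).toString("hex");   // random per user, not secret
          const hash = createHash("sha256").update(salt + password, "utf8").digest("hex");
          return `${salt}$${hash}`;                       // fits in one DB column
        }

        function verifyPassword(password: string, stored: string): boolean {
          const [salt, expected] = stored.split("$");     // recover the salt first
          const actual = createHash("sha256").update(salt + password, "utf8").digest("hex");
          return timingSafeEqual(Buffer.from(actual), Buffer.from(expected));
        }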

    Read the article

  • Using TPrinter in Delphi

    - by Milad
    Hello experts, I don't have any background in programming and this is my first shot. I wrote a Delphi program that is supposed to print onto a result sheet. I work in an institute and we have to produce hundreds of result sheets every 2 months. It's really difficult to do that by hand, and different handwriting is also an important issue. My problem is that when I write this code:

        if PrintDialog.Execute() then
        begin
          with MyPrinter do
          begin
            MyPrinter.BeginDoc(); //Start Printing
            //Prints First Name
            MyPrinter.Canvas.TextOut(FirstNameX,FirstNameY,EditFirstName.Text);
            //Prints Last Name
            MyPrinter.Canvas.TextOut(LastNameX,LastNameY,EditLastName.Text);
            //Prints Level
            MyPrinter.Canvas.TextOut(LevelX,LevelY,EditLevel.Text);
            //Prints Date
            MyPrinter.Canvas.TextOut(DateX,DateY,MEditDate.Text);
            //Prints Student Number
            MyPrinter.Canvas.TextOut(StdNumX,StdNumY,EditStdnumber.Text);
            ....
            MyPrinter.EndDoc(); //End Printing
          end;
        end;

    I can't get the right coordinates to print properly. Am I missing something? How can I set the right coordinates? TPrinter uses pixels for the coordinates, but paper is measured in inches or centimeters. I'm really confused. I appreciate any help. Thanks in advance.

    Read the article

  • How to nest shapes in a DSL Tools diagram?

    - by Paul Lalonde
    I have a DSL containing two main domain classes: Area and Entity. Areas are represented visually by a GeometryShape, whereas entities are represented by a CompartmentShape. Entities can be embedded in an Area, or not (in this case they are embedded in the root object, which is a kind of Area). There may be relationships between entities, including between entities in different areas. Areas cannot be embedded inside of other areas, nor entities embedded inside of other entities. My problem is that I cannot get the behavior I want from the diagram. The embedding of entities in areas works perfectly well at the model level, but the visual representation behaves erratically. For example, if I drag an entity that was created in an area outside of that area, it no longer responds to mouse clicks (I have code that performs the re-parenting, but somehow the diagram side of things is broken). I have searched high and low for samples of how to do this, and come up empty. Every example I've found on the web simulates nesting via "references" relationships, whereas I am performing true embedding of the domain classes (and therefore of their associated shape classes). Does anyone have an example of how to do this? While I'm venting, am I the only one who thinks the diagram/shape classes are massively under-documented?

    Read the article

  • Create image from scratch with JMagick

    - by Michael IV
    I am using the Java port of ImageMagick called JMagick. I need to be able to create a new image and write an arbitrary chunk of text into it. The docs are very poor, and what I have managed to do so far is write text into an image that comes from I/O. Also, in all the examples I have found, it seems the very first operation, before writing new image data, is always loading an existing image into an ImageInfo instance. How do I create an image from scratch with JMagick and then write text into it? Here is what I do now:

        try {
            ImageInfo info = new ImageInfo();
            info.setSize("512x512");
            info.setUnits(ResolutionType.PixelsPerInchResolution);
            info.setColorspace(ColorspaceType.RGBColorspace);
            info.setBorderColor(PixelPacket.queryColorDatabase("red"));
            info.setDepth(8);
            BufferedImage img = new BufferedImage(512,512,BufferedImage.TYPE_4BYTE_ABGR);
            byte[] imageBytes = ((DataBufferByte) img.getData().getDataBuffer()).getData();
            MagickImage mimage = new MagickImage(info,imageBytes);
            DrawInfo aInfo = new DrawInfo(info);
            aInfo.setFill(PixelPacket.queryColorDatabase("green"));
            aInfo.setUnderColor(PixelPacket.queryColorDatabase("yellow"));
            aInfo.setOpacity(0);
            aInfo.setPointsize(36);
            aInfo.setFont("Arial");
            aInfo.setTextAntialias(true);
            aInfo.setText("JMagick Tutorial");
            aInfo.setGeometry("+40+40");
            mimage.annotateImage(aInfo);
            mimage.setFileName("text.jpg");
            mimage.writeImage(info);
        } catch (MagickException ex) {
            Logger.getLogger(LWJGL_IDOMOO_SIMPLE_TEST.class.getName()).log(Level.SEVERE, null, ex);
        }

    It doesn't work; the JVM crashes with an access violation, as it probably expects the input image to come from I/O.

    Read the article

  • Validation with State Pattern for Multi-Page Forms in ASP.NET

    - by philrabin
    I'm trying to implement the state pattern for a multi-page registration form. The data on each page will be accumulated and stored in a session object. Should validation (including service layer calls to the DB) occur on the page level or inside each state class? In other words, should the concrete implementation of IState be concerned with the validation or should it be given a fully populated and valid object? See "EmptyFormState" class below:

        namespace Example
        {
            public class Registrar
            {
                private readonly IState formEmptyState;
                private readonly IState baseInformationComplete;

                public RegistrarSessionData RegistrarSessionData { get; set;}

                public Registrar()
                {
                    RegistrarSessionData = new RegistrarSessionData();
                    formEmptyState = new EmptyFormState(this);
                    baseInformationComplete = new BasicInfoCompleteState(this);
                    State = formEmptyState;
                }

                public IState State { get; set; }

                public void SubmitData(RegistrarSessionData data)
                {
                    State.SubmitData(data);
                }

                public void ProceedToNextStep()
                {
                    State.ProceedToNextStep();
                }
            }

            //actual data stored in the session
            //to be populated by page
            public class RegistrarSessionData
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
                //will include values of all 4 forms
            }

            //State Interface
            public interface IState
            {
                void SubmitData(RegistrarSessionData data);
                void ProceedToNextStep();
            }

            //Concrete implementation of IState
            //Beginning state - no data
            public class EmptyFormState : IState
            {
                private readonly Registrar registrar;

                public EmptyFormState(Registrar registrar)
                {
                    this.registrar = registrar;
                }

                public void SubmitData(RegistrarSessionData data)
                {
                    //Should Validation occur here?
                    //Should each state object contain a validation class? (IValidator ?)
                    //Should this throw an exception?
                }

                public void ProceedToNextStep()
                {
                    registrar.State = new BasicInfoCompleteState(registrar);
                }
            }

            //Next step, will have 4 in total
            public class BasicInfoCompleteState : IState
            {
                private readonly Registrar registrar;

                public BasicInfoCompleteState(Registrar registrar)
                {
                    this.registrar = registrar;
                }

                public void SubmitData(RegistrarSessionData data)
                {
                    //etc
                }

                public void ProceedToNextStep()
                {
                    //etc
                }
            }
        }
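
    One common answer to the question above is to give each state its own validation, so the page only forwards data and the state decides whether it is complete enough to advance. A hedged sketch in TypeScript rather than the poster's C#, with the field names and error handling as illustrative assumptions:

        interface StepData { firstName?: string; lastName?: string; }

        interface State {
          validate(data: StepData): string[];   // empty array means "valid"
          submit(data: StepData): void;
          next(): State;
        }

        class EmptyFormState implements State {
          constructor(private readonly registrar: Registrar) {}
          validate(data: StepData): string[] {
            const errors: string[] = [];
            if (!data.firstName) errors.push("First name is required");
            if (!data.lastName) errors.push("Last name is required");
            return errors;
          }
          submit(data: StepData): void {
            const errors = this.validate(data);        // the state owns validation
            if (errors.length) throw new Error(errors.join("; "));
            Object.assign(this.registrar.data, data);  // accumulate "session" data
          }
          next(): State { return new BasicInfoCompleteState(this.registrar); }
        }

        class BasicInfoCompleteState implements State {
          constructor(private readonly registrar: Registrar) {}
          validate(): string[] { return []; }          // later steps go here
          submit(): void {}
          next(): State { return this; }
        }

        class Registrar {
          data: StepData = {};
          state: State = new EmptyFormState(this);
          proceed(): void { this.state = this.state.next(); }
        }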

    Read the article

  • Skip makefile dependency generation for certain targets (e.g. `clean`)

    - by Shtééf
    I have several C and C++ projects that all follow a basic structure I've been using for a while now. My source files go in src/*.c, intermediate files in obj/*.[do], and the actual executable in the top level directory. My makefiles follow roughly this template:

        # The final executable
        TARGET := something

        # Source files (without src/)
        INPUTS := foo.c bar.c baz.c

        # OBJECTS will contain: obj/foo.o obj/bar.o obj/baz.o
        OBJECTS := $(INPUTS:%.cpp=obj/%.o)

        # DEPFILES will contain: obj/foo.d obj/bar.d obj/baz.d
        DEPFILES := $(OBJECTS:%.o=%.d)

        all: $(TARGET)

        obj/%.o: src/%.cpp
                $(CC) $(CFLAGS) -c -o $@ $<

        obj/%.d: src/%.cpp
                $(CC) $(CFLAGS) -M -MF $@ -MT $(@:%.d=%.o) $<

        $(TARGET): $(OBJECTS)
                $(LD) $(LDFLAGS) -o $@ $(OBJECTS)

        .PHONY: clean
        clean:
                -rm -f $(OBJECTS) $(DEPFILES) $(RPOFILES) $(TARGET)

        -include $(DEPFILES)

    Now I'm at the point where I'm packaging this for a Debian system. I'm using debuild to build the Debian source package, and pbuilder to build the binary package. The debuild step only has to execute the clean target, but even this causes the dependency files to be generated and included. In short, my question is really: Can I somehow prevent make from generating dependencies when all I want is to run the clean target?

    Read the article

  • Drupal workflow action access integrated with taxonomy access control?

    - by groovehunter
    Hi, I am building a DMS for our intranet and use a taxonomy hierarchy because we need access control that way. All company locations manage (upload, edit) their own documents but should be able to access all of them. This is inherited to the child terms and works fine. Additionally we want a simple 3-step workflow (draft, published, archived). So I introduced roles for editor, publisher and docadmin, set permissions for the transitions, and added triggers to effectively (un)publish documents. But (of course) a user with the publisher role can perform the transition for ALL documents, whereas we want a publisher per company location (top taxonomy level, see above). Can this be achieved? Do I have to set it up myself (I guess the Rules module is appropriate for this), or is there another module that helps? Role inheritance was a guess, but that is only about roles (naturally). I use the Module Grants module and checked the first option. That's the direction my thoughts are going; I hope you get my idea, or rather my problem. Drupal 6.16. Current edit: I reread the docs and found, for example, http://drupal.org/node/408018 (Revisioning for categorized content). Will reread that.

    Read the article

  • "do it all" page structure and things to watch out for?

    - by Andrew Heath
    I'm still getting my feet wet in PHP (my 1st language) and I've reached the competency level where I can code one page that handles all sorts of different related requests. They generally have a structure like this (pseudocode):

        <?php
        include 'include/functions.php';
        IF authorized
            IF submit (add data)
            ELSE IF update (update data)
            ELSE IF list (show special data)
            ELSE IF tab switch (show new area)
            ELSE display vanilla (show default)
        ELSE "must be registered/logged-in"
        ?>
        <HTML>
        // snip
        <?php echo $output; ?>
        // snip
        </HTML>

    and it all works nicely, and quite quickly, which is cool. But I'm still sort of feeling my way in the dark... and would like some input from the pros regarding this type of page design... Is it a good long-term structure? (It seems easily expanded...) Are there security risks particular to this design? Are there corners I should avoid painting myself into? Just curious about what lies ahead, really...
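
    A hedged sketch of the same "one page, many actions" idea expressed as a dispatch table, in TypeScript rather than the poster's PHP; the action names and stub handlers are purely illustrative:

        type Handler = (params: URLSearchParams) => string;

        const showDefault = () => "default view";

        const actions: Record<string, Handler> = {
          submit: (p) => `added ${p.get("item") ?? "nothing"}`,
          update: (p) => `updated ${p.get("item") ?? "nothing"}`,
          list:   ()  => "special data",
          tab:    (p) => `showing area ${p.get("tab") ?? "0"}`,
        };

        function handle(action: string, params: URLSearchParams, authorized: boolean): string {
          if (!authorized) return "must be registered/logged-in";
          const handler = actions[action];
          return handler ? handler(params) : showDefault();
        }

        // Example: handle("submit", new URLSearchParams("item=42"), true) returns "added 42"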

    Read the article
