Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • protecting grails melody with grails filter

    - by batmannavneet
    I have an application that uses Spring Security along with Grails Melody. I plan to run Grails Melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in Grails (just a sample of what I am trying, not the actual code): def filters = { allURIs(uri:'/**') { before = { /* ... */ if(request.forwardURI.indexOf("admin") != -1 || request.forwardURI.indexOf("monitoring") != -1) { response.sendError 404; return false } } } } But this doesn't work, because a request for "monitoring" never reaches this filter. I don't even want the user to know that such a URL exists, so I want the filter to serve the 404 error page whenever "monitoring" is the URL. That is also why I don't want to protect this URL with Spring Security: it would show an "access denied" page. Basically, I want these URLs to exist but be invisible to users, and access should be open only to certain IP addresses. On another note, is it possible to write a Grails filter that acts before the Spring Security filter is hit? I want to be able to do some filtering before I forward requests to Spring Security. Writing a Grails filter like the one above doesn't help: the Spring Security filter is hit first when I access a protected resource, and this filter doesn't get called. Thanks
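    If the monitoring plugin is implemented as a plain servlet filter (as JavaMelody-style monitoring usually is), it can answer the request before any Grails filter runs, which would explain why the filter above never sees "/monitoring". Below is a minimal sketch of the IP-whitelist idea at the servlet-filter level; it is generic servlet Java rather than Grails code, and the class name, allowed addresses, and URL checks are illustrative assumptions, not taken from the question.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet filter: answer 404 for the hidden URLs unless the caller's IP is whitelisted.
public class MonitoringAccessFilter implements Filter {
    private static final Set<String> ALLOWED =
            new HashSet<String>(Arrays.asList("127.0.0.1", "10.0.0.5")); // assumed admin addresses

    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        String uri = request.getRequestURI();
        boolean hidden = uri.contains("/monitoring") || uri.contains("/admin");
        if (hidden && !ALLOWED.contains(request.getRemoteAddr())) {
            response.sendError(HttpServletResponse.SC_NOT_FOUND); // pretend the URL does not exist
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}
```

    For the check to take effect it would have to be mapped ahead of the monitoring filter in the filter chain (for example, declared earlier in web.xml), which is also the usual route for running logic before Spring Security's own filter chain.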

    Read the article

  • Generating JavaScript in C# and subsequent testing

    - by Codebrain
    We are currently developing an ASP.NET MVC application which makes heavy use of attribute-based metadata to drive the generation of JavaScript. Below is a sample of the type of methods we are writing: string GetJavascript<T>(string javascriptPresentationFunctionName, string inputId, T model) { return @"function updateFormInputs(value){ $('#" + inputId + @"_SelectedItemState').val(value); $('#" + inputId + @"_Presentation').val(value); } function clearInputs(){ " + helper.ClearHiddenInputs<T>(model) + @" updateFormInputs(''); } function handleJson(json){ clearInputs(); " + helper.UpdateHiddenInputsWithJson<T>("json", model) + @" updateFormInputs(" + javascriptPresentationFunctionName + @"()); " + model.GetCallBackFunctionForJavascript("json") + @" }"; } This method generates some boilerplate and hands off to various other methods which return strings. The whole lot is then returned as a string and written to the output. The questions I have are: 1) Is there a nicer way to do this other than using large string blocks? We've considered using a StringBuilder or the Response Stream, but it seems quite 'noisy'. Using string.Format starts to become difficult to comprehend. 2) How would you go about unit testing this code? It seems a little amateur just doing a string comparison looking for particular output in the string. 3) What about actually testing the eventual JavaScript output? Thanks for your input!

    Read the article

  • LinqKit stack overflow exception using predicate builder

    - by MLynn
    I am writing an application in C# using LINQ and LINQKit. I have a very large database table with company registration numbers in it. I want to do a LINQ query which will produce the equivalent SQL: select * from table1 where regno in('123','456') The 'in' clause may have thousands of terms. First I get the company registration numbers from a field such as Country. I then add all the company registration numbers to a predicate: var predicate = PredicateExtensions.False<table2>(); if (RegNos != null) { foreach (int searchTerm in RegNos) { int temp = searchTerm; predicate = predicate.Or(ec => ec.regno.Equals(temp)); } } On Windows Vista Professional a stack overflow exception occurred after 4063 terms were added. On Windows Server 2003 a stack overflow exception occurred after about 1000 terms were added. I had to solve this problem quickly for a demo. To solve the problem I used this notation: var predicate = PredicateExtensions.False<table2>(); if (RegNosDistinct != null) { predicate = predicate.Or(ec => RegNos.Contains(ec.regno)); } My questions are: Why does a stack overflow occur using the foreach loop? I take it Windows Server 2003 has a much smaller stack per process\thread than the NT\2000\XP\Vista\Windows 7 workstation versions of Windows. Which is the fastest and most correct way to achieve this using LINQ and LINQKit? It was suggested I stop using LINQ and go back to dynamic SQL or ADO.NET, but I think using LINQ and LINQKit is far better for maintainability.

    Read the article

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be put inside SwingUtilities.invokeAndWait or SwingUtilities.invokeLater. This way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with only about 400 entries, and the nesting depth is at most 4, so it should be nothing scary, right? But it sometimes takes one second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what, the slow JTree updates do cause delays for the JTextPane during input. It refreshes only once the tree gets updated. I am using NetBeans and know empirically that a Java app can update lots of information without freezing the rest of the UI. How can it be done? NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait. NOTE 2: When I replace invokeAndWait with invokeLater the tree doesn't get updated. NOTE 3: Found out that recursive tree expansion takes by far the most time. NOTE 4: I'm using a custom tree cell renderer; will try without it and report. NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, depending on the tree node (as a key). CLUE 1: Wow! Without setting the custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.
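    Given CLUE 1, the renderer is the usual suspect: Swing treats the cell renderer as a cheap, reusable "rubber stamp", so creating or looking up JTextComponents per node defeats that design. Below is a hedged sketch of a lightweight renderer (plain Swing, names illustrative), assuming a simple text label is enough per node; heavy model construction and recursive expansion would additionally belong off the EDT, for example in a SwingWorker that builds the nodes in doInBackground and swaps the finished model in with a single setModel call.

```java
import java.awt.Component;
import javax.swing.JTree;
import javax.swing.tree.DefaultTreeCellRenderer;

// Illustrative lightweight renderer: one JLabel-based component, reused for every cell.
class FastTreeCellRenderer extends DefaultTreeCellRenderer {
    @Override
    public Component getTreeCellRendererComponent(JTree tree, Object value, boolean selected,
            boolean expanded, boolean leaf, int row, boolean hasFocus) {
        // DefaultTreeCellRenderer is itself a JLabel that Swing stamps onto each row,
        // so configuring it here stays cheap compared to building components per node.
        super.getTreeCellRendererComponent(tree, value, selected, expanded, leaf, row, hasFocus);
        setText(String.valueOf(value)); // derive the label from the node's user object
        return this;
    }
}
```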

    Read the article

  • Putting a C++ Vector as a Member in a Class that Uses a Memory Pool

    - by Deep-B
    Hey, I've been writing a multi-threaded DLL for database access using ADO/ODBC for use with a legacy application. I need to keep multiple database connections for each thread, so I've put the ADO objects for each connection in an object and thinking of keeping an array of them inside a custom threadInfo object. Obviously a vector would serve better here - I need to delete/rearrange objects on the go and a vector would simplify that. Problem is, I'm allocating a heap for each thread to avoid heap contention and stuff and allocating all my memory from there. So my question is: how do I make the vector allocate from the thread-specific heap? (Or would it know internally to allocate memory from the same heap as its wrapper class - sounds unlikely, but I'm not a C++ guy) I've googled a bit and it looks like I might need to write an allocator or something - which looks like so much of work I don't want. Is there any other way? I've heard vector uses placement-new for all its stuff inside, so can overloading operator new be worked into it? My scant knowledge of the insides of C++ doesn't help, seeing as I'm mainly a C programmer (even that - relatively). It's very possible I'm missing something elementary somewhere. If nothing easier comes up - I might just go and do the array thing, but hopefully it won't come to that. I'm using MS-VC++ 6.0 (hey, it's rude to laugh! :-P ). Any/all help will be much appreciated.

    Read the article

  • Orthogonal projection and texture coordinates in opengl

    - by knuck
    I'm writing a 2D game in OpenGL. I already set up an orthogonal projection so I can easily know where a quad will end up on screen. The problem is, I also want to be able to map pixels directly to texture coords, so I also applied an orthogonal transformation (using gluOrtho2D) to the texture. Now I can map pixels directly using integers and glTexCoord2i. The thing is, after googling/reading/asking, I found out that no one really knows (apparently) the behavior of glTexCoord2i, but it works just fine the way I'm using it. Some sample test code I wrote follows: glBegin(GL_QUADS); glTexCoord2i(16,0); glVertex2f(X, Y); glTexCoord2i(16,16); glVertex2f(X, Y+32); glTexCoord2i(32, 16); glVertex2f(X+32, Y+32); glTexCoord2i(32, 0); glVertex2f(X+32, Y); glEnd(); So, is there any problem with what I'm doing, or is what I'm doing correct?

    Read the article

  • Is using YIELD a read-only way to return a collection?

    - by Eric
    I'm writing an interface which has a collection property that I want to be read-only. I don't want users of the interface to be able to modify the collection. The typical suggestion I've found for creating a read-only collection property is to set the type of the property to IEnumerable, like this: private List<string> _mylist; public IEnumerable<string> MyList { get { return this._mylist; } } Yet this does not prevent the user from casting the IEnumerable back to a List and modifying it. If I use the yield keyword instead of returning _mylist directly, would this prevent users of my interface from being able to modify the collection? I think so, because then I'm only returning the objects one by one, and not the actual collection. private List<string> _mylist; public IEnumerable<string> MyList { get { foreach(string str in this._mylist) { yield return str; } } }

    Read the article

  • SQL: script to create country, state tables

    - by pcampbell
    Consider writing an application that requires registration for an entity, where the schema has been defined to require the country and state/prov/county data to be normalized. This is fairly typical stuff here. Naming is also important to get right. Each country has a different name for this entity: USA = states Australia = states + territories Canada = provinces + territories Mexico = states Brazil = states Sweden = provinces UK = counties, principalities, and perhaps more! Most times when approaching this problem, I have to scratch together a list of good countries and the states/provs/counties of each. The app may be concerned with a few countries and not others. The process is full of pain. It typically involves one of two approaches: opening up some previous DB and creating a CREATE script based on those tables, then running that script in the context of the new system; or creating a DTS package from database1 to database2, with all the DDL and data included in the transfer. My goal now is to script the creation and insert of the countries that I'd be concerned with in the app of the day. When I want to roll out countries X/Y/Z, I'll open CountryX.sql and load its states into the ProvState table. Question: do you have a set of scripts in your toolset to create schema and data for countries and state/province/county? If so, would you share your scripts here? (U.K. citizens, please feel free to correct me by way of a comment on the use of counties.)

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'; this is a passthrough stored procedure to another database on the same machine, called MyOtherDB, which has a stored procedure called 'CreateAudit' as well. In other words, in MyDb I have CreateAudit, which does the following: EXEC MyOtherDB.dbo.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode): Result = CreateAudit(recordId, "Opened") One line after that, I call: Result2 = CreateAudit(recordId, "Closed") The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses, I am very interested in hearing how this can happen.

    Read the article

  • Display post data!

    - by Comii
    I am trying to post data from a VB.NET application to an asmx web service that is located on a server. For posting data from the VB.NET application I am using this code: Public Function Post(ByVal url As String, ByVal data As String) As String Dim vystup As String = Nothing Try 'Our postvars Dim buffer As Byte() = Encoding.ASCII.GetBytes(data) 'Initialisation, we use localhost, change if applicable Dim WebReq As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest) 'Our method is post, otherwise the buffer (postvars) would be useless WebReq.Method = "POST" 'We use form contentType, for the postvars. WebReq.ContentType = "application/x-www-form-urlencoded" 'The length of the buffer (postvars) is used as contentlength. WebReq.ContentLength = buffer.Length 'We open a stream for writing the postvars Dim PostData As Stream = WebReq.GetRequestStream() 'Now we write, and afterwards, we close. Closing is always important! PostData.Write(buffer, 0, buffer.Length) PostData.Close() 'Get the response handle, we have no true response yet! Dim WebResp As HttpWebResponse = DirectCast(WebReq.GetResponse(), HttpWebResponse) 'Let's show some information about the response Console.WriteLine(WebResp.StatusCode) Console.WriteLine(WebResp.Server) 'Now, we read the response (the string), and output it. Dim Answer As Stream = WebResp.GetResponseStream() Dim _Answer As New StreamReader(Answer) 'Congratulations, you just requested your first POST page, you 'can now start logging into most login forms, with your application 'Or other examples. vystup = _Answer.ReadToEnd() Catch ex As Exception MessageBox.Show(ex.Message) End Try Return vystup.Trim() & vbLf End Function Now how can I retrieve this data in the asmx service?

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As-Insert Statements, and it will just dump a big sql script wherever you want. The only way I can think of doing it is to save the table to an excel sheet and then write an excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing, and I'd like some more opinions. The situation: the program should update in near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input. My solution was: build a server that runs continuously and stores the data internally. The server dumps a state-of-the-program at regular intervals to protect against power failures and/or crashes. He argued that the program requires a database and that I should use cron jobs to update the data. I can store movement information by storing start point, end point and speed, and update the position in the cron job (and calculate collisions with other objects there) by calculating direction and speed. His reasons: my approach requires more CPU & memory because it runs constantly; power failures/crashes might destroy data; databases are faster. My reasons against this are mostly: it is not very precise, as events can only occur at full minutes (wouldn't be that bad though); it requires a (possibly costly) transformation of data on every run from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; power failures (or other crashes) can leave the data in an undefined state with only partially updated data unless (possibly costly) precautions (like transactions) are taken. What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same: How to securely store database connection details Securely connecting to database within a application Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user that runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities: The first referenced question was generally answered by making the file only readable by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible by that person.) The first answer additionally proposed to encrypt the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string because he cannot encrypt it by hand. The second referenced question provides an approach for this very scenario, but it seems very complicated. My questions to you: This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"? Is there some support in .NET's config infrastructure? (optional, maybe out of scope) Can I combine that easily with the NHibernate configuration mechanism?

    Read the article

  • Cocoa-Created PDF Not Rendering Correctly

    - by Matthew Roberts
    I've created a PDF on the iPad, but the problem is when you have a line of text greater than 1 line, the content just goes off the page. This is my code: void CreatePDFFile (CGRect pageRect, const char *filename) { CGContextRef pdfContext; CFStringRef path; CFURLRef url; CFMutableDictionaryRef myDictionary = NULL; path = CFStringCreateWithCString (NULL, filename, kCFStringEncodingUTF8); url = CFURLCreateWithFileSystemPath (NULL, path, kCFURLPOSIXPathStyle, 0); CFRelease (path); myDictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); NSString *foos = @"Title"; const char *text = [foos UTF8String]; CFDictionarySetValue(myDictionary, kCGPDFContextTitle, CFSTR(text)); CFDictionarySetValue(myDictionary, kCGPDFContextCreator, CFSTR("Author")); pdfContext = CGPDFContextCreateWithURL (url, &pageRect, myDictionary); CFRelease(myDictionary); CFRelease(url); CGContextBeginPage (pdfContext, &pageRect); CGContextSelectFont (pdfContext, "Helvetica", 12, kCGEncodingMacRoman); CGContextSetTextDrawingMode (pdfContext, kCGTextFill); CGContextSetRGBFillColor (pdfContext, 0, 0, 0, 1); NSString *body = @"text goes here"; const char *text = [body UTF8String]; CGContextShowTextAtPoint (pdfContext, 30, 750, text, strlen(text)); CGContextEndPage (pdfContext); CGContextRelease (pdfContext);} Now the issue lies within me writing the text to the page, but the problem is that I can't seem to specify a multi-line "text view".

    Read the article

  • I have two choices of Master's classes this fall. Which is the most useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and manage other Software Engineers currently. I kind of wear two hats right now: one of a programmer, and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science, and have been working in the field for about 13 years. Our primary development environment is a Windows environment, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION Course Description: Basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS Course Description: (From catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization, evaluation and analysis of information, Internet-based information access tools, ethics and economics of information sharing.

    Read the article

  • write html content from javascript

    - by Nikita Rybak
    There's one thing I want to do with JavaScript, but I don't know how. In a perfect world it would look like this: <p>My very cool page!</p> <script type="text/javascript"> document.write('<div>some content</div>'); </script> And this script would insert <div>some content</div> right before (or after, or instead of) the script tag. But in the real world, document.write starts writing the HTML anew, removing any static content in the page (the <p> tag, in this case). This is a simplified example, but it should give you the idea. I know that I can statically put <div id="some_id"></div> before the script tag and insert HTML into it from JS, but I want to be able to use multiple instances of this snippet without changing it (manually generating a random id) each time. I'm OK with using jQuery or any other existing library as well. Is there any way to achieve this? Thanks!

    Read the article

  • How to html encode the output of an NHaml view (or any MVC view)?

    - by jessegavin
    I have several views written in NHaml that I would like to render as encoded html. Here's one. %table.data %thead %tr %th Country Name %th ISO 2 %th ISO 3 %th ISO # %tbody - foreach(var c in ViewData.Model.Countries) %tr %td =c.Name %td =c.Alpha2 %td =c.Alpha3 %td =c.Number I know that NHaml provides syntax to Html encode the output for given lines using &=. However, in order to encode the entire view, I would essentially lose the benefit of writing my view in NHaml since it would have to look like this.... &= "<table class='data'> &= " <thead> So I was wondering if there was any cool way to be able to capture the rendered view as a string, then to html encode that string. Maybe something like the following??? public ContentResult HtmlTable(string format) { var m = new CountryViewModel(); m.Countries = _countryService.GetAll(); // Somehow render the view and store it as a string? // Not sure how to achieve this. var viewHtml = View("HtmlTable", m); // ??? return Content(viewHtml); } This question may actually have no particular relevance to the View engine that I am using I guess. Any help or thoughts would be appreciated.

    Read the article

  • rails application on production not working

    - by Steven
    I have a Rails application in production which is running using Mongrel. I can successfully start Mongrel for the application, but when I try to access the application at the URL it is not responding... it is just hanging. This is the Mongrel log... but when I hit xxx.xxx.xxx.xx:3001 it is not showing the website, although in development it works fine. ** Starting Mongrel listening at 0.0.0.0:3001 ** Initiating groups for "name.co.za":"name.co.za". ** Changing group to "name.co.za". ** Changing user to "name.co.za". ** Starting Rails with production environment... ** Rails loaded. ** Loading any Rails specific GemPlugins ** Signals ready. TERM = stop. USR2 = restart. INT = stop (no restart). ** Rails signals registered. HUP = reload (without restart). It might not work well. ** Mongrel 1.1.5 available at 0.0.0.0:3001 ** Writing PID file to /home/name.co.za/shared/log/mongrel.pid

    Read the article

  • Preferred way of application initialization

    - by lisak
    Do you guys have your own little framework for project startups? I mean, every time one needs to do the same things at the beginning: context initialization, ideally after arguments are processed; sometimes without interactive user input, sometimes with an input reader; sometimes we need to load properties, sometimes not; then we need to get a class out of the context and run its method; programming... programming, until writing the shell script that places everything on the classpath. It's true that it differs according to the actual needs. But it seems to me that I'm almost always doing nearly the same thing, again and again from scratch. Sometimes I realize that I'm postponing my work just because I don't want to do these annoying startups. It would be great if there was some kind of universal Main class doing reflection to a specified bean, context initialization, argument parsing, and interactive user input reading, and letting the programmer do the important things... All setup might be done via Spring configuration. I think I'll have to do it myself. I'd appreciate your ideas
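    As a rough illustration of that "universal Main" idea, here is a hedged Java sketch, assuming a Spring XML context: it takes the context location, a bean name, and a no-arg method name from the command line, builds the context, and invokes the bean's method reflectively. The class name and argument layout are invented for the example; real argument parsing, property loading, and interactive input would still be layered on top.

```java
import java.lang.reflect.Method;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Illustrative launcher: args[0] = Spring XML on the classpath, args[1] = bean name, args[2] = method.
public final class GenericLauncher {
    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            System.err.println("usage: GenericLauncher <context.xml> <beanName> <methodName>");
            System.exit(1);
        }
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(args[0]);
        try {
            Object bean = context.getBean(args[1]);
            Method entryPoint = bean.getClass().getMethod(args[2]); // no-arg method by convention
            entryPoint.invoke(bean);                                // hand control to the application bean
        } finally {
            context.close();
        }
    }
}
```

    A shell wrapper then only has to put the jars on the classpath and run something like java GenericLauncher app-context.xml mainBean run (names again illustrative).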

    Read the article

  • Vim, LaTeX, word-wrapping, and version control

    - by Bkkbrad
    I'm writing a LaTeX document in vim, and I have it hard wrapping at 80 characters to make reading easier. However, this causes problems with tracking changes in version control. For example, inserting "Lorem ipsum" at the beginning of this text: 1 Dolor sit amet, consectetur adipiscing elit. Phasellus bibendum lobortis lectus 2 quis porta. Aenean vestibulum magna vel purus laoreet at molestie massa 3 suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien 4 ullamcorper elit, dignissim consectetur justo tellus et nunc. results in: 1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus bibendum 2 lobortis lectus quis porta. Aenean vestibulum magna vel purus laoreet at 3 molestie massa suscipit. Vestibulum vestibulum, mauris nec convallis ultrices, 4 tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc. When I review this change in git, it tells me that all the lines of the paragraph have changed because of the wrapping, even though only one semantic change has occurred. One way around this problem is to have every sentence on its own line. This looks the same in the rendered document, but the source is now harder to read, because each line has quite a different line length: 1 Lorem ipsum dolor sit amet, consectetur adipiscing elit. 2 Phasellus bibendum lobortis lectus quis porta. 3 Aenean vestibulum magna vel purus laoreet at molestie massa suscipit. 4 Vestibulum vestibulum, mauris nec convallis ultrices, tellus sapien ullamcorper elit, dignissim consectetur justo tellus et nunc. (If I soft wrap at 80, things still look bad, just in a different way.) Is it possible to have my text on disk with one newline per sentence, but display and edit it in vim as if the text of each paragraph was one long line, soft wrapped at 80 characters? I assume it requires some vim-foo rather than tweaking git or LaTeX.

    Read the article

  • standard way to perform a clean shutdown with Boost.Asio

    - by Timothy003
    I'm writing a cross-platform server program in C++ using Boost.Asio. Following the HTTP Server example on this page, I'd like to handle a user termination request without using implementation-specific APIs. I've initially attempted to use the standard C signal library, but have been unable to find a design pattern suitable for Asio. The Windows example's design seems to resemble the signal library closest, but there's a race condition where the console ctrl handler could be called after the server object has been destroyed. I'm trying to avoid race conditions that cause undefined behavior as specified by the C++ standard. Is there a standard (correct) way to stop the server? So far: #include <csignal> #include <functional> #include <boost/asio.hpp> using std::signal; using boost::asio::io_service; extern "C" { static void handle_signal(int); } namespace { std::function<void ()> sighandler; } void handle_signal(int) { sighandler(); } int main() { io_service s; sighandler = std::bind(&io_service::stop, &s); auto res = signal(SIGINT, &handle_signal); // race condition? SIGINT raised before I could set ignore back if (res == SIG_IGN) signal(SIGINT, SIG_IGN); res = signal(SIGTERM, &handle_signal); // race condition? SIGTERM raised before I could set ignore back if (res == SIG_IGN) signal(SIGTERM, SIG_IGN); s.run(); // reset signals signal(SIGTERM, SIG_DFL); signal(SIGINT, SIG_DFL); // is it defined whether handle_signal can still be in execution at this // point? sighandler = nullptr; }

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good or even decent evaluation function? The main challenges are that the same pieces are always on the board, so a usual material function won't change based on position, and the game has been played fewer than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS. I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas? Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules, and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.
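    Since nothing is ever captured, material terms drop out and the evaluation usually ends up as a weighted sum of hand-picked positional features (mobility, distance of each piece to the special squares, how many of those squares are already held, piece coordination, and so on), with the weights tuned afterwards, for example by hill-climbing against self-play results. The Java sketch below is only a hedged illustration of that shape; GameState, the features, and the weights are all placeholders, not anything specific to the actual game.

```java
import java.util.List;

// Placeholder for the game-specific board representation.
interface GameState { }

// One positional feature, scored from one player's point of view
// (e.g. legal-move count, or summed distance of pieces to the goal squares).
interface Feature {
    double score(GameState state, int player);
}

// Illustrative evaluation: weighted sum of features, own score minus the opponent's.
final class LinearEvaluator {
    private final List<Feature> features;
    private final double[] weights;

    LinearEvaluator(List<Feature> features, double[] weights) {
        if (features.size() != weights.length) {
            throw new IllegalArgumentException("one weight per feature");
        }
        this.features = features;
        this.weights = weights;
    }

    double evaluate(GameState state, int player) {
        double total = 0.0;
        for (int i = 0; i < features.size(); i++) {
            Feature f = features.get(i);
            total += weights[i] * (f.score(state, player) - f.score(state, 1 - player));
        }
        return total; // fed to the alpha-beta search at the leaf nodes
    }
}
```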

    Read the article

  • How to distinguish between two different UDP clients on the same IP address?

    - by Ricket
    I'm writing a UDP server, which is a first for me; I've only done a bit of TCP communications. And I'm having trouble figuring out exactly how to distinguish which user is which, since UDP deals only with packets rather than connections and I therefore cannot tell exactly who I'm communicating with. Here is pseudocode of my current server loop: DatagramPacket p; socket.receive(p); // now p contains the user's IP and port, and the data int key = getKey(p); if(key == 0) { // connection request key = makeKey(p); clients.add(key, p.ip); send(p.ip, p.port, key); // give the user his key } else { // user has a key // verify key belongs to that IP address // lookup the user's session data based on the key // react to the packet in the context of the session } When designing this, I kept in mind these points: Multiple users may exist on the same IP address, due to the presence of routers, therefore users must have a separate identification key. Packets can be spoofed, so the key should be checked against its original IP address and ignored if a different IP tries to use the key. The outbound port on the client side might change among packets. Is that third assumption correct, or can I simply assume that one user = one IP+port combination? Is this commonly done, or should I continue to create a special key like I am currently doing? I'm not completely clear on how TCP negotiates a connection so if you think I should model it off of TCP then please link me to a good tutorial or something on TCP's SYN/SYNACK/ACK mess. Also note, I do have a provision to resend a key, if an IP sends a 0 and that IP already has a pending key; I omitted it to keep the snippet simple. I understand that UDP is not guaranteed to arrive, and I plan to add reliability to the main packet handling code later as well.
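    On the third assumption: at the wire level a UDP peer is identified only by its source IP and port, and two users behind the same router will normally arrive with different source ports, so keying sessions on the full socket address does tell them apart; the catch is that a NAT may rebind that port between packets, which is exactly why the explicit key handed out above is the more robust design. For comparison, here is a hedged Java sketch of the socket-address-keyed variant; the port number and Session contents are illustrative only.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketAddress;
import java.util.HashMap;
import java.util.Map;

// Illustrative server loop keyed on the datagram's source address (IP + port).
public class UdpSessionDemo {
    static class Session {
        long lastSeen;            // used to expire idle clients
        // per-user session state would live here
    }

    public static void main(String[] args) throws Exception {
        Map<SocketAddress, Session> sessions = new HashMap<SocketAddress, Session>();
        DatagramSocket socket = new DatagramSocket(9999); // port chosen for the example
        byte[] buffer = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);
            SocketAddress from = packet.getSocketAddress();
            Session session = sessions.get(from);
            if (session == null) {
                session = new Session();      // equivalent of the "connection request" branch
                sessions.put(from, session);
            }
            session.lastSeen = System.currentTimeMillis();
            // ... react to the packet in the context of this session ...
        }
    }
}
```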

    Read the article

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1, in Ubuntu 10.04 When I try to use the FFI syntax for static storage, only modules running in interpreted mode (ie GHCI) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this an error in my code or GHC, and (if the latter) whether it's a known issue. I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as: extern __const char *__const sys_siglist[_NSIG]; I thought this type might be the problem, so I also tried wrapping it in a plain C procedure: #include<stdio.h> const char **test_ffi_import() { printf("C think sys_siglist = %X\n", sys_siglist); return sys_siglist; } However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading. Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment? Code as follows: -- A.hs {-# LANGUAGE ForeignFunctionInterface #-} module A where import Foreign import Foreign.C foreign import ccall "&sys_siglist" siglist_a :: Ptr CString -- -- B.hs {-# LANGUAGE ForeignFunctionInterface #-} module B where import Foreign import Foreign.C foreign import ccall "&sys_siglist" siglist_b :: Ptr CString -- -- Main.hs {-# LANGUAGE ForeignFunctionInterface #-} module Main where import Foreign import Foreign.C import A import B foreign import ccall "&sys_siglist" siglist_main :: Ptr CString main = do putStrLn $ "siglist_a = " ++ show siglist_a putStrLn $ "siglist_b = " ++ show siglist_b putStrLn $ "siglist_main = " ++ show siglist_main peekSiglist "a " siglist_a peekSiglist "b " siglist_b peekSiglist "main" siglist_main peekSiglist name siglist = do ptr <- peekElemOff siglist 2 str <- maybePeek peekCString ptr putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str I would expect something like this output, where all pointer values identical and valid: $ runhaskell Main.hs siglist_a = 0x00007f53a948fe00 siglist_b = 0x00007f53a948fe00 siglist_main = 0x00007f53a948fe00 siglist_a [2] = Just "Interrupt" siglist_b [2] = Just "Interrupt" siglist_main[2] = Just "Interrupt" However, if I compile A.hs (with ghc -c A.hs), then the output changes to: $ runhaskell Main.hs siglist_a = 0x0000000040378918 siglist_b = 0x00007fe7c029ce00 siglist_main = 0x00007fe7c029ce00 siglist_a [2] = Nothing siglist_b [2] = Just "Interrupt" siglist_main[2] = Just "Interrupt"

    Read the article

  • Perf4J Graph Output from Log File

    - by manyxcxi
    I currently have a long running process that I am trying to analyze with Perf4J. I currently have it writing results in CSV format to its own log file using the AsyncCoalescingStatisticsAppender and a StatisticsCsvLayout on the file appender. My question is; when I try and use the --graph option from the command line (using the perf4j jar) it isn't populating the data points- it isn't populating anything. Are my appenders set incorrectly? The log file contains hundreds (sometimes thousands) of data points of about 10 different tag names. <appender name="perfAppender" class="org.apache.log4j.FileAppender"> <param name="File" value="perfStats.log"/> <layout class="org.perf4j.log4j.StatisticsCsvLayout"> </layout> </appender> <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender"> <!-- The TimeSlice option is used to determine the time window for which all received StopWatch logs are aggregated to create a single GroupedTimingStatistics log. Here we set it to 10 seconds, overriding the default of 30000 ms --> <param name="TimeSlice" value="10000"/> <appender-ref ref="ConsoleAppender"/> <appender-ref ref="CompositeRollingFileAppender"/> <appender-ref ref="perfAppender"/> </appender>
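    One thing worth double-checking, sketched below, is the producing side: the statistics and graphing machinery only has something to aggregate if StopWatch log events are actually reaching the CoalescingStatistics appender. A hedged Java example of emitting them with Perf4J's Log4JStopWatch follows; the tag names and the surrounding class are illustrative, not taken from the process being profiled.

```java
import org.perf4j.StopWatch;
import org.perf4j.log4j.Log4JStopWatch;

// Illustrative instrumentation: each stop() emits a StopWatch log event that the
// AsyncCoalescingStatisticsAppender configured above aggregates per TimeSlice.
public class TimedTask {
    public void runOnce() {
        StopWatch watch = new Log4JStopWatch();
        try {
            doWork();
            watch.stop("TimedTask.runOnce.success"); // tag names group the timing statistics
        } catch (RuntimeException e) {
            watch.stop("TimedTask.runOnce.failure");
            throw e;
        }
    }

    private void doWork() {
        // placeholder for the long-running work being measured
    }
}
```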

    Read the article
