Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.

Page 579/674

  • TicTacToe strategic reduction

    - by NickLarsen
    I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. The full game tree using minimax ends up with 549,946 possible games. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go.

    The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. By the same reasoning, the four edge midpoints are equivalent to one another, and the center is a case of its own. Reducing to strategically significant states only leaves 3 first moves to evaluate instead of 9. This technique should be very useful since it prunes states near the top of the game tree. The idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form and just do what is needed to apply the technique to TicTacToe.

    In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions) and to return only the lowest of the hash values for each board. Unfortunately, my program now thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent that the program was always returning the move stored with the lowest (canonical) equivalent board, which may be a rotation or reflection of the position actually being played (I store the last move in the transposition table as part of my state). Is there a better way to go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done? (One possible fix is sketched after this entry.)

    Read the article
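
    One way out of the bug described above is to remember, alongside the canonical hash, which symmetry produced it; a move stored for the canonical board can then be mapped back through that permutation to a move that is legal on the real board. A rough C# sketch under assumptions not taken from the question (board as an int[9] in row-major order):

        // Sketch: canonicalize a 3x3 board over its 8 symmetries, keeping the
        // permutation that produced the canonical form so stored moves can be
        // translated back to the actual board.
        static readonly int[] Rot90  = { 6, 3, 0, 7, 4, 1, 8, 5, 2 }; // dest i <- src Rot90[i]
        static readonly int[] Mirror = { 2, 1, 0, 5, 4, 3, 8, 7, 6 };

        static int[] Apply(int[] perm, int[] board)
        {
            var result = new int[9];
            for (int i = 0; i < 9; i++) result[i] = board[perm[i]];
            return result;
        }

        // Apply(Compose(a, b), x) == Apply(a, Apply(b, x))
        static int[] Compose(int[] a, int[] b)
        {
            var c = new int[9];
            for (int i = 0; i < 9; i++) c[i] = b[a[i]];
            return c;
        }

        static IEnumerable<int[]> AllSymmetries()
        {
            var r = new[] { 0, 1, 2, 3, 4, 5, 6, 7, 8 }; // identity
            for (int k = 0; k < 4; k++)
            {
                yield return r;
                yield return Compose(Mirror, r);
                r = Compose(Rot90, r);
            }
        }

        // Returns the canonical key plus the permutation that reached it. A best
        // move at canonical cell i corresponds to cell toActual[i] on this board.
        static string Canonicalize(int[] board, out int[] toActual)
        {
            string bestKey = null;
            toActual = null;
            foreach (var perm in AllSymmetries())
            {
                var key = string.Concat(Apply(perm, board));
                if (bestKey == null || string.CompareOrdinal(key, bestKey) < 0)
                {
                    bestKey = key;
                    toActual = perm;
                }
            }
            return bestKey;
        }

    The point of the out parameter is that the transposition table keeps storing moves in canonical coordinates; only the lookup translates them back, so table entries remain shared across all eight symmetric variants.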

  • How can I have 2 ADO access methods use the same Transaction?

    - by KevinDeus
    I'm writing a test to see if my LINQ to Entities statement works; I'll reuse this approach for others if I can get the concept going. My intention is to INSERT a record with ADO, verify it can be queried with LINQ, and then ROLL BACK the whole thing at the end. I'm using ADO to insert because I don't want to use the object or the entity model that I am testing; a plain ADO INSERT should do fine. The problem is that the two use different types of connections. Is it possible to have these two different data access methods use the same transaction, so I can roll it back? (A TransactionScope-based sketch follows this entry.)

        _conn = new SqlConnection(_connectionString);
        _conn.Open();
        _trans = _conn.BeginTransaction();

        var x = new SqlCommand(
            "INSERT INTO Table1 (ID, LastName, FirstName, DateOfBirth) " +
            "VALUES ('127', 'test2', 'user', '2-12-1939');", _conn);
        x.Transaction = _trans; // the command must be enlisted in the open transaction
        x.ExecuteNonQuery();
        // So far, so good: a record is added to the table.

        // At this point we need to call _trans.Commit() here, because our entity
        // code can't use the same connection. Then I have to manually delete the
        // row in TestHarness.TearDown; I'd like to eliminate this step.

        // (This code is in another object; it is inlined here for brevity.
        // Imagine that I passed the connection in.)
        // Check to see if the row is there:
        using (var ctx = new XEntities(_conn)) // can't do this: _conn is not an EntityConnection!
        {
            var retVal = (from m in ctx.Table1
                          where m.first_name == "test2"
                          where m.last_name == "user"
                          where m.Date_of_Birth == "2-12-1939"
                          where m.ID == 127
                          select m).FirstOrDefault();
            return (retVal != null);
        }

        // Do test..
        Assert.BlahBlah();
        _trans.Rollback();

    Read the article
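
    If holding on to the SqlTransaction is not essential, one commonly suggested alternative is System.Transactions.TransactionScope: both the SqlConnection and the entity context enlist in the same ambient transaction, and disposing the scope without calling Complete() rolls everything back. A sketch, assuming XEntities has a parameterless constructor using the configured connection string (note that enlisting two physical connections may escalate the transaction to MSDTC, depending on the SQL Server version):

        using System.Data.SqlClient;
        using System.Linq;
        using System.Transactions; // reference System.Transactions.dll

        static void InsertVerifyRollBack(string connectionString)
        {
            using (var scope = new TransactionScope())
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open(); // enlists in the ambient transaction automatically
                    using (var cmd = new SqlCommand(
                        "INSERT INTO Table1 (ID, LastName, FirstName, DateOfBirth) " +
                        "VALUES (127, 'test2', 'user', '2-12-1939');", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }
                }

                using (var ctx = new XEntities()) // opens its own connection, same ambient transaction
                {
                    bool found = ctx.Table1.FirstOrDefault(m => m.ID == 127) != null;
                    // assert on 'found' here
                }

                // Intentionally no scope.Complete(): both operations roll back on Dispose.
            }
        }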

  • How do I insert an HTML file that has Hebrew text and read it back in SQL Server 2008 using FILESTREAM?

    - by gadym
    Hello all, I am new to the FILESTREAM option in SQL Server 2008, but I already understand how to enable this option and how to create a table that allows you to save files. Say my table contains: id, name, filecontent. I tried to insert an HTML file (which has Hebrew chars/text in it) into this table. I'm writing in ASP.NET (C#), using Visual Studio 2008. But when I read the content back, Hebrew chars become '?'. The actions I took: I read the file like this:

        // open the stream reader
        System.IO.StreamReader aFile = new StreamReader(FileName, System.Text.UTF8Encoding.UTF8);
        // read the file to the end
        stream = aFile.ReadToEnd();
        // close the file
        aFile.Close();
        return stream; // return the contents

    I inserted 'stream' into the filecontent column as binary data. A SELECT on this column does return the data (after I convert it back to a string), but Hebrew chars come back as '?'. How do I solve this problem? What should I pay attention to? (A byte-for-byte round trip is sketched after this entry.) Thanks, gadym

    Read the article
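
    '?' characters usually mean the text passed through a non-Unicode code page somewhere along the way. A hedged sketch that keeps the file as raw bytes end to end and decodes explicitly as UTF-8 on the way out; the table name "files" is invented, the columns follow the question, and the FILESTREAM column is varbinary(max), so SqlDbType.VarBinary with size -1 applies:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text;

        // Assumes an already-open SqlConnection.
        static void RoundTrip(SqlConnection conn, string fileName)
        {
            byte[] fileBytes = File.ReadAllBytes(fileName); // no string conversion on insert

            using (var insert = new SqlCommand(
                "INSERT INTO files (id, name, filecontent) VALUES (@id, @name, @content)", conn))
            {
                insert.Parameters.AddWithValue("@id", 1);
                insert.Parameters.AddWithValue("@name", fileName);
                insert.Parameters.Add("@content", SqlDbType.VarBinary, -1).Value = fileBytes;
                insert.ExecuteNonQuery();
            }

            byte[] stored;
            using (var select = new SqlCommand(
                "SELECT filecontent FROM files WHERE id = @id", conn))
            {
                select.Parameters.AddWithValue("@id", 1);
                stored = (byte[])select.ExecuteScalar();
            }

            // Decode with the same encoding the file was written in; the Hebrew
            // text survives because no lossy code-page conversion ever happens.
            string html = Encoding.UTF8.GetString(stored);
        }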

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team uses it. However, there always seems to be a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, with the same result.

        static string RunCommand(string executable, string path, string arguments)
        {
            var psi = new ProcessStartInfo()
            {
                FileName = executable,
                Arguments = arguments,
                WorkingDirectory = path,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                WindowStyle = ProcessWindowStyle.Maximized,
                CreateNoWindow = true
            };

            var sbOut = new StringBuilder();
            var sbErr = new StringBuilder();
            var sw = new Stopwatch();
            sw.Start();

            var process = Process.Start(psi);
            TimeSpan firstRead = TimeSpan.Zero;
            process.OutputDataReceived += (s, e) =>
            {
                if (firstRead == TimeSpan.Zero)
                {
                    firstRead = sw.Elapsed;
                }
                sbOut.Append(e.Data);
            };
            process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data);
            process.BeginOutputReadLine();
            process.BeginErrorReadLine();
            var eventsStarted = sw.Elapsed;

            process.WaitForExit();
            var processExited = sw.Elapsed;
            sw.Reset();

            if (process.ExitCode != 0 || sbErr.Length > 0)
            {
                Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString());
            }
            return sbOut.ToString();
        }

    Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy. (A startup-cost measurement is sketched after this entry.)

    Read the article
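
    For what it's worth, much of a delay in that range is often hg.exe itself starting up (a Python interpreter plus its imports) rather than the .NET plumbing; timing a no-op command gives a baseline. The sketch below also wires the output handler up before Start(), which the static Process.Start(psi) form cannot do, so no early output can be missed:

        using System;
        using System.Diagnostics;

        // Sketch: measure the bare startup cost of hg.exe (assumed on PATH)
        // with a no-op command, attaching handlers before the process starts.
        static void TimeHgStartup()
        {
            var psi = new ProcessStartInfo("hg.exe", "version")
            {
                UseShellExecute = false,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            var sw = Stopwatch.StartNew();
            using (var process = new Process { StartInfo = psi })
            {
                process.OutputDataReceived += (s, e) => { /* discard */ };
                process.Start();               // handlers already attached
                process.BeginOutputReadLine();
                process.WaitForExit();
            }
            Console.WriteLine("hg version took {0} ms", sw.ElapsedMilliseconds);
        }

    If that baseline accounts for most of the 300-400 ms, no amount of tuning on the C# side will remove it, and caching results as planned is the right call.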

  • What about race condition in multithreaded reading?

    - by themoob
    Hi,

    According to an article on IBM.com, "a race condition is a situation in which two or more threads or processes are reading or writing some shared data, and the final result depends on the timing of how the threads are scheduled. Race conditions can lead to unpredictable results and subtle program bugs." Although the article concerns Java, I have in general been taught the same definition.

    As far as I know, the simple operation of reading from RAM is composed of setting the states of specific input lines (address, read, etc.) and reading the states of output lines. This is an operation that obviously cannot be executed simultaneously by two devices and has to be serialized. Now suppose we have a situation where a couple of threads access an object in memory. In theory, this access should be serialized in order to prevent race conditions. But the readers/writers algorithm, for example, assumes that an arbitrary number of readers can use the shared memory at the same time.

    So, the question is: does one have to implement an exclusive lock for reads when using multithreading (e.g. in the WinAPI)? If not, why not? Where is this control implemented: the OS or the hardware? (A reader/writer lock example follows this entry.)

    Best regards, Kuba

    Read the article
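
    In short: the hardware serializes individual bus-level accesses on its own, and concurrent reads of data that nobody is writing need no lock at all; locking only becomes necessary once at least one thread mutates the shared state. When reads and writes do mix, a reader/writer lock expresses exactly the policy the question describes. A minimal C# illustration (names invented for the example):

        using System.Threading;

        // Sketch: ReaderWriterLockSlim lets any number of readers proceed in
        // parallel while a writer gets exclusive access; reads alone need no
        // mutual exclusion among themselves because they don't change state.
        class SharedCounter
        {
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
            private int _value;

            public int Read()
            {
                _lock.EnterReadLock();          // many threads may hold this at once
                try { return _value; }
                finally { _lock.ExitReadLock(); }
            }

            public void Write(int value)
            {
                _lock.EnterWriteLock();         // exclusive: waits for readers to drain
                try { _value = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }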

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ [runs on Linux]. It's very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O reads block for requests of a size greater than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux appears to take care of buffering output, so I issue a write(record) instead of maintaining a special buffer of writes and then flushing them all at once with write(records[]). But the performance of the application still does not seem great. How could I improve it? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes that already provide this abstraction? (Something like BufferedInputStream in Java.)

    Read the article

  • Program to find the result of primitive recursive functions

    - by alphomega
    I'm writing a program to evaluate primitive recursive functions:

        --Basic functions------------------------------

        --Zero function
        z :: Int -> Int
        z = \_ -> 0

        --Successor function
        s :: Int -> Int
        s = \x -> (x + 1)

        --Identity/Projection function generator
        idnm :: Int -> Int -> ([Int] -> Int)
        idnm n m = \(x:xs) -> ((x:xs) !! (m-1))

        --Constructors--------------------------------

        --Composition constructor
        cn :: ([Int] -> Int) -> [([Int] -> Int)] -> ([Int] -> Int)
        cn f [] = \(x:xs) -> f
        cn f (g:gs) = \(x:xs) -> (cn (f (g (x:xs))) gs)

    These functions and constructors are defined here: http://en.wikipedia.org/wiki/Primitive_recursive_function

    The issue is with my attempt to create the composition constructor, cn. When it gets to the base case, f is no longer a partial application but the result of the function, yet the function expects a function as its first argument. How can I deal with this problem? Thanks.

    Read the article

  • VB.NET 2008, Windows 7 and saving files

    - by James Brauman
    Hello, we have to learn VB.NET for the semester; my experience lies mainly with C#, not that this should make a difference to this particular problem. I've used just about the simplest way to save a file using the .NET framework, but Windows 7 won't let me save the file anywhere (or anywhere that I have found yet). Here is the code I am using to save a text file:

        Dim dialog As FolderBrowserDialog = New FolderBrowserDialog()
        Dim saveLocation As String = dialog.SelectedPath

        ' ... Build up output string ...

        Try
            ' Try to write the file.
            My.Computer.FileSystem.WriteAllText(saveLocation, output, False)
        Catch PermissionEx As UnauthorizedAccessException
            ' We do not have permissions to save in this folder.
            MessageBox.Show("Do not have permissions to save file to the folder specified. Please try saving somewhere different.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        Catch Ex As Exception
            ' Catch any exceptions that occurred when trying to write the file.
            MessageBox.Show("Writing the file was not successful.", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error)
        End Try

    The problem is that this code throws an UnauthorizedAccessException no matter where I try to save the file. I've tried running the .exe file as administrator, and the IDE as administrator. Is this just Windows 7 being overprotective? And if so, what can I do to solve this problem? The requirements state that I must be able to save a file! Thanks.

    Read the article

  • Modelling in Agile Development

    - by bertzzie
    I'm writing a bachelor dissertation report in which I'm developing a system with an Agile methodology. Given that the development is a one-man show, the "Agile" I did was of course not really agile at all (from my understanding at least). So I want some perspective from the SO crowd: professional, real-world developers with tons of experience. I think real-world experience is better than the theory and experiments that I did.

    My question is: do we model during development time when using Agile? UML? DFD? Or is a functional specification enough?1 If modelling is not really necessary, what do we use to communicate with the user, given that the user almost always won't understand UML or DFD? For my system, I use UI & UX design with heavy prototyping, but then I don't have time to draw UML any more. Which one is better?

    1 http://www.joelonsoftware.com/articles/fog0000000036.html

    I hope the question's not "subjective and argumentative"; I know this question exists because of my lack of understanding of agile development. If it is, could someone just give me a pointer or reference about that? Possible duplicate: Do you use UML in Agile development practices?

    Read the article

  • Should I invest in GraniteDS for Flex + Java development?

    - by Boden
    I'm new to Flex development, and RIAs in general. I've got a CRUD-style Java + Spring + Hibernate service on top of which I'm writing a Flex UI. Currently I'm using BlazeDS. This is an internal application running on a local network.

    It's become apparent to me that RIAs work more like a desktop application than a web application, in that we load up the entire model (or at least the portion we're interested in) and work with it directly on the client. This doesn't really jibe well with BlazeDS, because it only supports remoting, not data management; it becomes a lot of extra work to keep clients in sync and to avoid reloading the model, which can be large (especially since lazy loading is not possible). So it feels like what I'm left with is a situation where I have to treat my Flex application more like a regular old web application, doing a lot of fine-grained loading of data.

    LiveCycle is too expensive. The free version of WebOrb for Java really only does remoting. Enter GraniteDS. As far as I can determine, it's the only free solution out there with many of the data management features of LiveCycle. I've started to go through its documentation a bit and suddenly feel like it's yet another quagmire of framework that I'll have to learn just to get an application running.

    So my questions to the StackOverflow audience are: 1) do you recommend GraniteDS, especially if my current Java stack is Spring + Hibernate? 2) at what point do you feel it starts to pay off? That is, at what level of application complexity does using GraniteDS really start to make development that much better, and in what ways?

    Read the article

  • How do I create an inheritable Semaphore in .NET?

    - by pauldoo
    I am trying to create a Win32 semaphore object which is inheritable, meaning that any child processes I launch automatically have the right to act on the same Win32 object. My code currently looks as follows:

        Semaphore semaphore = new Semaphore(0, 10);
        Process process = Process.Start(pathToExecutable, arguments);

    But the semaphore object in this code cannot be used by the child process. The code I am writing is a port of some working C++, which achieves this with the following:

        SECURITY_ATTRIBUTES security = {0};
        security.nLength = sizeof(security);
        security.bInheritHandle = TRUE;
        HANDLE semaphore = CreateSemaphore(&security, 0, LONG_MAX, NULL);

    Then, later, when CreateProcess is called, the bInheritHandles argument is set to TRUE. (In both the C# and C++ case I am using the same child process, which is C++. It takes the semaphore ID on the command line and uses the value directly in a call to ReleaseSemaphore.) I suspect I need to construct a special SemaphoreSecurity or ProcessStartInfo object, but I haven't figured it out yet. (A named-semaphore alternative is sketched after this entry.)

    Read the article
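
    If handle inheritance itself isn't a hard requirement, a named semaphore sidesteps the SECURITY_ATTRIBUTES dance entirely: the child opens the same kernel object by name instead of receiving an inherited handle value. A sketch; the name string is invented for illustration, and the child-side code changes from calling ReleaseSemaphore on a raw handle to opening by name:

        using System.Diagnostics;
        using System.Threading;

        // Parent: create a named semaphore and pass the name to the child.
        var semaphore = new Semaphore(0, 10, "MyApp.WorkerSemaphore");
        Process process = Process.Start(pathToExecutable, "MyApp.WorkerSemaphore");

        // Child: open the same kernel object by name and release it directly.
        Semaphore same = Semaphore.OpenExisting("MyApp.WorkerSemaphore");
        same.Release();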

  • How do people handle foreign keys on clients when synchronizing to master db

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master db every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server returns a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values.

    However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server db with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. (A sketch of this remapping follows this entry.)

    I'm wondering whether others have come across this problem and, if so, how they have solved it. Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)", where someone seemed to suggest my current solution as one option, composite keys using suids and autoincrementing sequences as another, and negative ids for client ids (later updating all negative ids with the suids) as a third. Both of these other options seem like a lot more work. Thanks, Saimon

    Read the article
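
    On the server side, the remapping described above can be as small as a per-client dictionary applied to every luid-bearing field before the command touches the master database. A sketch with invented names (SyncCommand, ClientMapping); the ListId field stands in for the todo-to-list foreign key from the question:

        using System;
        using System.Collections.Generic;

        // Invented shape of an incoming client command.
        class SyncCommand
        {
            public Guid Id;
            public Guid ListId; // foreign key to the parent list
        }

        class ClientMapping
        {
            // Loaded per client id; persisted alongside the master db.
            private readonly Dictionary<Guid, Guid> _luidToSuid = new Dictionary<Guid, Guid>();

            public Guid Resolve(Guid id)
            {
                Guid suid;
                return _luidToSuid.TryGetValue(id, out suid) ? suid : id; // already a suid
            }

            public void Apply(SyncCommand cmd)
            {
                cmd.Id = Resolve(cmd.Id);
                cmd.ListId = Resolve(cmd.ListId);
                // ...persist, then return any new luid -> suid pairs to the client
            }
        }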

  • Doctrine 2 entity cannot be saved with its many-to-many related entities

    - by user1333185
    I'm writing a project in ZF and have many-to-many related account and video entities, and I provide the account with the videos collection. When I try to save the account I receive the following error:

        A new entity was found through the relationship 'Entities\Account#playlistVideos' that was not
        configured to cascade persist operations for entity: Entities\Video@000000004d5f02ef00000000196757dc.
        Explicitly persist the new entity or configure cascading persist operations on the relationship.
        If you cannot find out which entity causes the problem implement 'Entities\Video#__toString()'
        to get a clue.

    I found a suggested solution with cascade={"persist"}, which should allow videos to be saved together with the account by only calling $this->em->persist($account), but then I get the error:

        Fatal error: method_exists(): The script tried to execute a method or access a property of an
        incomplete object. Please ensure that the class definition "Proxies\EntitiesAccountProxy" of the
        object you are trying to operate on was loaded before unserialize() gets called or provide
        a __autoload() function to load the class definition in...

    The same happens when I try to manually persist the videos and the account. Thank you for the help in advance.

    Read the article

  • How to add dimensions to dynamic img elements (Updated)

    - by Mohammad
    I use a JSON call to get a list of image addresses, then I add them individually to a div like this (unfortunately the image dimensions are not part of the JSON information):

        <div id="container">
          <img src="A.jpg" alt="" />
          <img src="B.jpg" alt="" />
          ...
        </div>

    Do any of you jQuery geniuses know of a way to flawlessly and dynamically add the true width and height to each img element in the container as soon as each individual one is rendered? I was thinking the code could check for width > 0 to evaluate when an image has actually been rendered, and then fire. But I don't know how to go about that in a stable way. What is the best way of going about this? Thank you so much!

    Update: as the answers point out, adding the width or height to the elements is pretty routine. The problem here is actually writing code that knows when to do that, and evaluating that condition for each image, not for the page as a whole.

    Read the article

  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)

    Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 |  75.0000 |  10.0000 |
        |  2 | 400.0000 |  64.0000 |
        |  3 | 100.0000 |  22.0000 |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance, e.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2.

    What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not. I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1
        JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query, rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that. I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly created table, but I was wondering whether there is a better way to do it that I am missing.

    Read the article

  • What is the difference between calling Stream.Write and using a StreamWriter?

    - by John Nelson
    What is the difference between instantiating a Stream object, such as MemoryStream, and calling the memoryStream.Write() method to write to the stream, versus instantiating a StreamWriter object with the stream and calling streamWriter.Write()?

    Consider the following scenario: you have a method that takes a Stream, writes a value, and returns it. The stream is read from later on, so the position must be reset. There are two possible ways of doing it (both seem to work):

        // Instantiate a MemoryStream somewhere
        // - Passed to the following two methods
        MemoryStream memoryStream = new MemoryStream();

        // Not using a StreamWriter
        private static Stream WriteToStream(Stream stream, string value)
        {
            stream.Write(Encoding.Default.GetBytes(value), 0, value.Length);
            stream.Flush();
            stream.Position = 0;
            return stream;
        }

        // Using a StreamWriter
        private static Stream WriteToStreamWithWriter(Stream stream, string value)
        {
            StreamWriter sw = new StreamWriter(stream);
            sw.Write(value, 0, value.Length);
            sw.Flush();
            stream.Position = 0;
            return stream;
        }

    This is partially a scope problem, as I don't want to close the stream after writing to it, since it will be read from later. I also certainly don't want to dispose it, because that would close my stream. The difference seems to be that not using a StreamWriter introduces a direct dependency on Encoding.Default, but I'm not sure that's a very big deal. What's the difference, if any? (An encoding-explicit variant is sketched after this entry.)

    Read the article
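
    One concrete difference worth noting: Stream.Write takes a byte count, not a character count, so passing value.Length breaks as soon as the encoding produces more than one byte per character. A hedged variant that pins the encoding down on both paths:

        // Raw stream: count the *bytes* the encoding actually produced.
        private static Stream WriteToStreamUtf8(Stream stream, string value)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(value);
            stream.Write(bytes, 0, bytes.Length);   // bytes.Length, not value.Length
            stream.Flush();
            stream.Position = 0;
            return stream;
        }

        // StreamWriter: the same encoding made explicit instead of the writer's
        // default. (This overload emits a UTF-8 preamble/BOM first; pass
        // new UTF8Encoding(false) to suppress it.)
        private static Stream WriteToStreamWithWriterUtf8(Stream stream, string value)
        {
            var sw = new StreamWriter(stream, Encoding.UTF8);
            sw.Write(value);
            sw.Flush(); // don't Dispose: that would close the underlying stream
            stream.Position = 0;
            return stream;
        }

    With the encoding fixed on both sides, the two approaches produce the same bytes, and the remaining difference is convenience: the writer buffers and converts text for you, the raw stream does exactly what it is told.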

  • Trusted Folder/Drive Picker in the Browser

    - by kylepfritz
    I'd like to write a folder/drive picker that runs in the browser and allows a user to select files to upload to a web service. The primary usage would be selecting folders or a whole CD and uploading them to the web with their directory structure intact. I'm imagining something akin to Jumploader, but which automatically enumerates external drives and CDs. I remember a version of Facebook's picture uploader that could do this sort of enumeration and was Java-based, but it has since been replaced by a much slicker plugin-based architecture.

    Because the application needs to run at very high trust, I think I'm limited to old-school Java applets. Is there another alternative? I'm hesitant to start down the plugin route because of the necessity of writing one for both IE and Mozilla at a minimum. Are there good places to get started there?

    On the applet front, I built a clunky prototype to demonstrate that I can enumerate devices and list files. It runs fine in the applet viewer, but I don't think I have the security settings configured correctly for it to run in the browser at full trust. Currently I don't get any drives back when I run it in the browser.

    Applet prototype:

        public class Loader extends javax.swing.JApplet {
            ...
            private void EnumerateDrives(java.awt.event.ActionEvent evt) {
                File[] roots = File.listRoots();
                StringBuilder b = new StringBuilder();
                for (File root : roots) {
                    b.append(root.getAbsolutePath() + ", ");
                }
                jLabel.setText(b.toString());
            }
        }

    Embed HTML:

        <p>Loader:</p>
        <script src="http://www.java.com/js/deployJava.js" type="text/javascript"></script>
        <script>
            var attributes = {code:'org.exampl.Loader.Loader.class',
                              archive:'Loader/dist/Loader.jar',
                              width:600, height:400};
            var parameters = {};
            deployJava.runApplet(attributes, parameters, '1.6');
        </script>

    Read the article

  • execute a string of PHP code on the command line

    - by Matthew J Morrison
    I'd like to be able to run a line of PHP code on the command line, similar to how the following options work:

        :~> perl -e "print 'hi';"
        :~> python -c "print 'hi'"
        :~> ruby -e "puts 'hi'"

    I'd like to be able to do:

        :~> php "echo 'hi';"

    I've read that there is a -r option that can do what I need for PHP; however, it doesn't appear to be available when I try to use it. I've tried PHP 5.2.13 and PHP 4.4.9, and neither has a -r option available. I wrote this script (which I called run_php.php) that works, but I'm not a huge fan of it, simply because I feel there should be a more "correct" way to do it:

        #!/usr/bin/php5 -q
        <?php echo eval($argv[1]); ?>

    My question is: is there a -r option? If so, why is it not available when I run --help? If there is no -r option, what is the best way to do this (without writing an intermediary script, if possible)? Thanks!

    Read the article

  • MSSQL 2005 FOR XML

    - by Lima
    Hi, I want to export data from a table to a specifically formatted XML file. I am fairly new to XML files, so what I am after may be quite obvious, but I just can't find what I am looking for on the net. The format of the XML results I need is:

        <data>
          <event start="May 28 2006 09:00:00 GMT"
                 end="Jun 15 2006 09:00:00 GMT"
                 isDuration="true"
                 title="Writing Timeline documentation"
                 image="http://simile.mit.edu/images/csail-logo.gif">
            A few days to write some documentation
          </event>
        </data>

    My table structure is:

        name VARCHAR(50),
        description VARCHAR(255),
        startDate DATETIME,
        endDate DATETIME

    (I am not too interested in the XML fields image or isDuration at this point in time.) I have tried:

        SELECT [name]
              ,[description]
              ,[startDate]
              ,[endTime]
        FROM [testing].[dbo].[time_timeline]
        FOR XML RAW('event'), ROOT('data'), type

    Which gives me:

        <data>
          <event name="Test1" description="Test 1 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
          <event name="Test2" description="Test 2 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
        </data>

    What I am missing is that the description needs to appear as the element's text content rather than as one of the event attributes. Is anyone able to point me in the correct direction, or point me to a tutorial or similar on how to accomplish this? Thanks, Matt

    Read the article

  • How do I cast a void pointer to a struct in C?

    - by Rowhawn
    In a project I'm writing code for, I have a void pointer, "implementation", which is a member of a "Hash_map" struct and points to an "Array_hash_map" struct. The concepts behind this project are not very realistic, but bear with me. The specifications of the project ask that I cast the void pointer "implementation" to an "Array_hash_map" before I can use it in any functions.

    My question, specifically, is: what do I do in the functions to cast the void pointers to the desired struct? Is there one statement at the top of each function that casts them, or do I make the cast every time I use "implementation"?

    Here are the typedefs of the structs of a Hash_map and Array_hash_map, as well as a couple of functions making use of them:

        typedef struct {
            Key_compare_fn key_compare_fn;
            Key_delete_fn key_delete_fn;
            Data_compare_fn data_compare_fn;
            Data_delete_fn data_delete_fn;
            void *implementation;
        } Hash_map;

        typedef struct Array_hash_map {
            struct Unit *array;
            int size;
            int capacity;
        } Array_hash_map;

        typedef struct Unit {
            Key key;
            Data data;
        } Unit;

    Functions:

        /* Sets the value parameter to the value associated with the
           key parameter in the Hash_map. */
        int get(Hash_map *map, Key key, Data *value) {
            int i;
            if (map == NULL || value == NULL)
                return 0;
            for (i = 0; i < map->implementation->size; i++) {
                if (map->key_compare_fn(map->implementation->array[i].key, key) == 0) {
                    *value = map->implementation->array[i].data;
                    return 1;
                }
            }
            return 0;
        }

        /* Returns the number of values that can be stored in the Hash_map,
           since it is represented by an array. */
        int current_capacity(Hash_map map) {
            return map.implementation->capacity;
        }

    Read the article

  • Ways of breaking down SQL transactional/call data into reports -- 'square data'?

    - by RizwanK
    I've got a large database of call-traffic information (although the question could be answered with any generic data set). For instance, a row contains:

        call endpoint server (endpoint_name)
        call endpoint status (sip_disconnect_reason)
        call destination (destination)
        call completed (duration) [duration > 0 is completed]
        call account group (account_group)

    It's pretty easy to run SQL reports against the data, i.e.

        select count(*), endpoint_name from calls where duration > 0 group by endpoint_name

        select count(*), destination from calls where blah group by destination

    I've been calling these filtering or breakdown reports (I get the number of calls per carrier, etc.). Add another breakdown and you've got two breakdowns, a la

        select count(*), endpoint_name, sip_disconnect_reason
        from calls
        where duration = 0
        group by endpoint_name, sip_disconnect_reason

    Of course, if you keep adding breakdowns, you end up making super-large reports and slicing your data so thin that you can't extract any trends from it.

    So my question is this: is there a name for this sort of method of report writing? (I've heard words like squares, slicing and breakdown reports applied to them.) I'm looking for a Python/reporting toolkit that I can use to make these easier to generate for my end users.

    Aside: are there other ways of representing transactional data that might be more useful than the above method? Thanks,

    Read the article

  • Why is TRest in Tuple<T1... TRest> not constrained?

    - by Anthony Pegram
    In a Tuple, if you have more than 7 items, you can provide an 8th item that is another tuple, define up to 7 items in it, then another tuple as its 8th item, and so on down the line. However, there is no constraint on the 8th item at compile time. For example, this is legal code for the compiler:

        var tuple = new Tuple<int, int, int, int, int, int, int, double>
            (1, 1, 1, 1, 1, 1, 1, 1d);

    This is despite the fact that the IntelliSense documentation says TRest must be a Tuple. You do not get any error when writing or building the code; it does not manifest until runtime, in the form of an ArgumentException. You can roughly implement a Tuple in a few minutes, complete with a Tuple-constrained 8th item. I just wonder why it was left off the current implementation. Is it possibly a forward-compatibility issue, where they could add more elements with a hypothetical C# 5?

    Short version of a rough implementation:

        interface IMyTuple { }

        class MyTuple<T1> : IMyTuple
        {
            public T1 Item1 { get; private set; }
            public MyTuple(T1 item1) { Item1 = item1; }
        }

        class MyTuple<T1, T2> : MyTuple<T1>
        {
            public T2 Item2 { get; private set; }
            public MyTuple(T1 item1, T2 item2) : base(item1) { Item2 = item2; }
        }

        class MyTuple<T1, T2, TRest> : MyTuple<T1, T2> where TRest : IMyTuple
        {
            public TRest Rest { get; private set; }
            public MyTuple(T1 item1, T2 item2, TRest rest) : base(item1, item2) { Rest = rest; }
        }

        ...

        var mytuple = new MyTuple<int, int, MyTuple<int>>(1, 1, new MyTuple<int>(1)); // legal
        var mytuple2 = new MyTuple<int, int, int>(1, 2, 3); // illegal

    Read the article

  • Using Dispose on a Singleton to Cleanup Resources

    - by ImperialLion
    The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance that is created during the execution of the application. When the application closes, this database should be deleted.

    Right now this deletion is handled by a Cleanup() method on the singleton, which the application calls when it is closing. As I was writing the documentation for Cleanup(), it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources. I had originally not implemented IDisposable because it seemed out of place on my singleton: I didn't want anything to dispose the singleton itself. There isn't currently, but there might in the future be, a reason for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this) in the Dispose method to make this feasible.

    My question therefore has several parts:

    1) Is implementing IDisposable on a singleton fundamentally a bad idea?

    2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose(), and since I'm disposing resources I really should use Dispose()?

    3) Will implementing Dispose() with GC.SuppressFinalize(this) ensure my singleton is not actually destroyed, in the case where I want it to live on after a call to clean up the database?

    (A sketch of one possible shape for this follows the entry.)

    Read the article
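
    On point 3: GC.SuppressFinalize only tells the collector to skip the finalizer; it has no effect on object lifetime, which is governed by reachability, and a static reference keeps a singleton reachable regardless. A minimal sketch of one common shape, assuming the only resource is the working database (DeleteDatabase is a made-up stand-in for the actual cleanup logic):

        using System;

        // Sketch: a singleton whose Dispose is idempotent and only releases the
        // database resource; the instance itself stays reachable afterwards.
        public sealed class DatabaseManager : IDisposable
        {
            public static readonly DatabaseManager Instance = new DatabaseManager();

            private bool _disposed;

            private DatabaseManager() { /* create the working database */ }

            public void Dispose()
            {
                if (_disposed) return;   // safe to call more than once
                DeleteDatabase();
                _disposed = true;
                // No finalizer is defined, so GC.SuppressFinalize is unnecessary;
                // it also would not keep the object alive. Reachability does that,
                // and the static Instance field keeps this object reachable.
            }

            private void DeleteDatabase() { /* delete the database instance */ }
        }

    If Dispose might be followed by further use of the singleton, that is really Cleanup() semantics rather than Dispose() semantics; callers conventionally assume a disposed object is finished.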

  • Best MAILING LIST solution for a CONFERENCE and its 400 participants

    - by Ole Morten Amundsen
    Dear community, what would you recommend for mailing lists? The conference is non-profit, named Smidig2010 (= Agile2010 in Norwegian), and will have about 400-500 participants on 16.-17. November. At the time of writing this, we have not opened for registration, but we would like people to be able to participate, ask questions, get informed and get inspired. We've used a forum before, but forums don't seem to be a good fit for this.

    I would like to set up a mailing list. It'll have to be KISS for the users: enter your email (an input box at our site smidig2010.no), get a confirmation mail, click a link, then start posting, reading through archives, answering others, etc.

    I like the look and feel of Google Groups, but I don't like the signup/account-creation overhead imposed on the user. I've heard you can combine Google Groups with Mailman and similar tools, but, yeah, I can't believe our own incompetence on this subject! By the way, we are mostly developers, and the conference app is being written in Ruby on Rails. Being non-profit, we prefer free, but we take everything into consideration. Any suggestions?

    Read the article

  • LINQ to SQL: Reusable expression for property?

    - by coenvdwel
    Pardon me for being unable to phrase the title more exactly. Basically, I have three LINQ objects linked to tables. One is Product, another is Company, and the last is a mapping table Mapping that stores which Company sells which Products and by which ID this Company refers to this Product. I am currently retrieving a list of products as follows:

        var options = new DataLoadOptions();
        options.LoadWith<Product>(p => p.Mappings);
        context.LoadOptions = options;

        var products = (
            from p in context.Products
            select new
            {
                ProductID = p.ProductID,
                //BackendProductID = p.BackendProductID,
                BackendProductID = (p.Mappings.Count == 0) ? "None" :
                                   (p.Mappings.Count > 1) ? "Multiple" :
                                   p.Mappings.First().BackendProductID,
                Description = p.Description
            }).ToList();

    This does a single query retrieving the information I want. But I want to move the logic behind BackendProductID into the LINQ object so I can use the commented-out line instead of the annoyingly nested ternary operators, for neatness and reusability. So I added the following property to the Product object:

        public string BackendProductID
        {
            get
            {
                if (Mappings.Count == 0) return "None";
                if (Mappings.Count > 1) return "Multiple";
                return Mappings.First().BackendProductID;
            }
        }

    The list is still the same, but it now runs a query for every single Product to get its BackendProductID. The code is neater and reusable, but the performance is now terrible. What I need is some kind of Expression or delegate, but I couldn't get my head around writing one; it always ended up querying for every single product anyway. Any help would be appreciated! (One reusable-Expression sketch follows this entry.)

    Read the article
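
    The reason the property version falls apart is that LINQ to SQL cannot see inside a compiled getter, so it loads the Mappings per row instead of translating the logic into SQL. One way to keep the translation server-side and still reuse the logic is to put the whole projection in a named Expression and hand it straight to Select(). A sketch; ProductSummary is an invented DTO, everything else follows the question's model:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public class ProductSummary
        {
            public int ProductID { get; set; }
            public string BackendProductID { get; set; }
            public string Description { get; set; }

            // The projection lives in one reusable Expression; because it is
            // passed directly to Select(), LINQ to SQL can translate it to SQL.
            public static readonly Expression<Func<Product, ProductSummary>> FromProduct =
                p => new ProductSummary
                {
                    ProductID = p.ProductID,
                    BackendProductID = p.Mappings.Count == 0 ? "None"
                                     : p.Mappings.Count > 1 ? "Multiple"
                                     : p.Mappings.First().BackendProductID,
                    Description = p.Description
                };
        }

        // Usage: still a single query, no per-row round trips.
        // var products = context.Products.Select(ProductSummary.FromProduct).ToList();

    The trade-off is that the reusable unit is the whole projection rather than the single BackendProductID property, but any query that needs that logic can now share the one expression.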
