Search Results

Search found 1742 results on 70 pages for 'combine'.


  • Sending items in a LINQ sequence to a method that returns void

    - by generalt
    Hello all. Often while I'm dealing with LINQ sequences, I want to send each item to a method returning void, avoiding a foreach loop. However, I haven't found an elegant way to do this. Today, I wrote the following code:

        private StreamWriter _sw;

        private void streamToFile(List<ErrorEntry> errors)
        {
            if (_sw == null)
            {
                _sw = new StreamWriter(Path.Combine(Path.GetDirectoryName(_targetDatabasePath), "errors.txt"));
            }

            Func<ErrorEntry, bool> writeSelector = (e) =>
            {
                _sw.WriteLine(getTabDelimititedLine(e));
                return true;
            };

            errors.Select(writeSelector);
            _sw.Flush();
        }

    As you can see, I write a lambda function that just returns true, and I realize that the Select method will return a sequence of booleans -- I'll just ignore that sequence. However, this seems a little bit noobish and jank. Is there any elegant way to do this? Or am I just misapplying LINQ? Thanks.
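    A couple of common alternatives, sketched inside the asker's own types (note also that Select is lazy, so as written the lambda never runs until something enumerates the result). List<T> already exposes a void-returning ForEach, and a tiny extension method covers arbitrary sequences:

        // List<T>.ForEach is built in:
        errors.ForEach(e => _sw.WriteLine(getTabDelimititedLine(e)));

        // A minimal extension for any IEnumerable<T> (a sketch, not part of LINQ itself):
        public static class EnumerableExtensions
        {
            public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
            {
                foreach (var item in source)
                    action(item);
            }
        }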

    Read the article

  • ZIP Numerous Blob Files

    - by Michael
    I have a database table that contains numerous PDF blob files. I am attempting to combine all of the files into a single ZIP file that I can download and then print. Please help!

        <?php
        include 'config.php';
        include 'connect.php';

        $session = $_GET[session];

        $query = "SELECT $tbl_uploads.username, $tbl_uploads.description, $tbl_uploads.type,
                         $tbl_uploads.size, $tbl_uploads.content, $tbl_members.session
                  FROM $tbl_uploads
                  LEFT JOIN $tbl_members ON $tbl_uploads.username = $tbl_members.username
                  WHERE $tbl_members.session = '$session'";
        $result = mysql_query($query) or die('Error, query failed');

        while(list($username, $description, $type, $size, $content) = mysql_fetch_array($result))
        {
            header("Content-length: $size");
            header("Content-type: $type");
            header("Content-Disposition: inline; filename=$username-$description.pdf");
            echo $content;
        }

        $files = array('File 1 from database', 'File 2 from database');
        $zip = new ZipArchive;
        $zip->open('file.zip', ZipArchive::CREATE);
        foreach ($files as $file) {
            $zip->addFile($file);
        }
        $zip->close();

        header('Content-Type: application/zip');
        header('Content-disposition: attachment; filename=filename.zip');
        header('Content-Length: ' . filesize($zipfilename));
        readfile($zipname);
        mysql_close($link);
        exit;
        ?>
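    A sketch of one way this usually gets restructured: skip the per-file PDF headers entirely, write each blob straight into the archive with ZipArchive::addFromString(), and only then send the ZIP. The variable names are taken from the question; the temp-file handling is illustrative.

        <?php
        $zipPath = tempnam(sys_get_temp_dir(), 'zip');
        $zip = new ZipArchive();
        $zip->open($zipPath, ZipArchive::OVERWRITE);

        while (list($username, $description, $type, $size, $content) = mysql_fetch_array($result)) {
            // addFromString() stores the blob under the given name; no temp PDF files needed
            $zip->addFromString("$username-$description.pdf", $content);
        }
        $zip->close();

        header('Content-Type: application/zip');
        header('Content-Disposition: attachment; filename="files.zip"');
        header('Content-Length: ' . filesize($zipPath));
        readfile($zipPath);
        unlink($zipPath);
        exit;
        ?>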

    Read the article

  • Help decoupling Crystal Report from CrystalReportViewer

    - by John at CashCommons
    I'm using Visual Studio 2005 with VB.NET. I have a number of Crystal Reports, each with their own associated dialog resource containing a CrystalReportViewer. The class definitions look like this:

        Imports System.Windows.Forms
        Imports CrystalDecisions.CrystalReports.Engine
        Imports CrystalDecisions.Shared

        Public Class dlgForMyReport

            Private theReport As New myCrystalReport
            Public theItems As New List(Of MyItem)

            Private Sub OK_Button_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OK_Button.Click
                Me.DialogResult = System.Windows.Forms.DialogResult.OK
                Me.Close()
            End Sub

            Private Sub Cancel_Button_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Cancel_Button.Click
                Me.DialogResult = System.Windows.Forms.DialogResult.Cancel
                Me.Close()
            End Sub

            Private Sub dlgForMyReport_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                theReport.SetDataSource(theItems)
                'Do a bunch of stuff here to set data items in theReport
                Me.myCrystalReportViewer.ReportSource = theReport
            End Sub

        End Class

    I basically instantiate the dialog, set theItems to the list I want, and call ShowDialog. I now have a need to combine several of these reports into one report (possibly like this), but the code that loads up the fields in the report is in the dialog. How would I go about decoupling the report initialization from the dialog? Thanks!
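    One decoupling sketch, assuming the classes above; the module and method names are illustrative. The report gets built in a shared factory that knows nothing about any dialog, so either the viewer dialog or a future combined report can consume the result:

        Public Module MyReportFactory

            Public Function BuildMyReport(ByVal items As List(Of MyItem)) As myCrystalReport
                Dim report As New myCrystalReport()
                report.SetDataSource(items)
                'Move the "bunch of stuff" that sets data items in the report here
                Return report
            End Function

        End Module

        'The dialog's Load handler then only does presentation:
        'Me.myCrystalReportViewer.ReportSource = MyReportFactory.BuildMyReport(theItems)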

    Read the article

  • T-SQL: most efficient rows-to-columns? (crosstab, FOR XML PATH, PIVOT)

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not the actual schema below, but the concept is similar) in both fixed-width and delimited formats. The FOR XML PATH query below gives me the result I want, but when dealing with anything other than small amounts of data it can take a while.

        select orderid
              ,REPLACE((SELECT ' ' + CAST(ProductId as varchar)
                        FROM _details d
                        WHERE d.OrderId = o.OrderId
                        ORDER BY d.OrderId, d.DetailId
                        FOR XML PATH('')
                       ),'&#x20;','') as Products
        from _orders o

    I've looked at PIVOT, but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out that I don't need to deal with the column names, since the output of the child rows will be either a fixed-width string or a delimited string. For example, given the following tables:

        OrderId     CustomerId
        ----------- -----------
        1           1
        2           2
        3           3

        DetailId    OrderId     ProductId
        ----------- ----------- -----------
        1           1           100
        2           1           158
        3           1           234
        4           2           125
        5           3           101
        6           3           105
        7           3           212
        8           3           250

    for an order I need to output:

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    or

        orderid     Products
        ----------- -----------------------
        1           100|158|234
        2           125
        3           101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2k5. Example setup:

        create table _orders
        (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details
        (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union
        select 2 union
        select 3

        insert into _details (OrderId,ProductId)
        select 1,100 union
        select 1,158 union
        select 1,234 union
        select 2,125 union
        select 3,105 union
        select 3,101 union
        select 3,212 union
        select 3,250

    Using FOR XML PATH:

        select orderid
              ,REPLACE((SELECT ' ' + CAST(ProductId as varchar)
                        FROM _details d
                        WHERE d.OrderId = o.OrderId
                        ORDER BY d.OrderId, d.DetailId
                        FOR XML PATH('')
                       ),'&#x20;','') as Products
        from _orders o

    which outputs what I want; however, it is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250
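    A hedged suggestion rather than a rewrite: the correlated FOR XML PATH subquery runs once per order, so on a multi-million-row child table the cost is usually the repeated lookup into _details rather than the XML trick itself. A covering index that matches the correlation and the ORDER BY (supported on SQL Server 2005) is the first thing to try; the index name is illustrative.

        CREATE NONCLUSTERED INDEX IX_details_OrderId
            ON _details (OrderId, DetailId)
            INCLUDE (ProductId);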

    Read the article

  • When do you tag your software project?

    - by WilhelmTell of Purple-Magenta
    I realize there are various kinds of software projects:

        - commercial (for John Doe)
        - industrial (for Mr. Montgomery Burns)
        - successful open-source (with an audience larger than, say, 10 people)
        - personal projects (with an audience size in the vicinity of 1)

    each of which releases a new version of its product under different conditions. I'm particularly interested in the case of personal projects and open-source projects. When, or under what conditions, do you make a new release of any kind? Do you subscribe to a fixed recurring deadline, such as every two weeks? Do you commit to a release after at least 10 minor fixes, or one major fix? Do you combine the two conditions, so that at least one must hold, or both? I reckon this is a subjective question. I ask it in light of searching for tricks to keep my projects alive and kicking. Sometimes my projects are active but look as if they aren't, because I don't have the confidence to make a release or a tag of any sort for a long time -- on the order of months.

    Read the article

  • Retrieving dll version info via Win32 - VerQueryValue(...) crashes under Win7 x64

    - by user256890
    The respected open-source .NET wrapper implementation (SharpBITS) of the Windows BITS service fails to identify the underlying BITS version under Win7 x64. Here is the source code that fails. NativeMethods are native Win32 calls wrapped by .NET methods and decorated with the DllImport attribute.

        private static BitsVersion GetBitsVersion()
        {
            try
            {
                string fileName = Path.Combine(System.Environment.SystemDirectory, "qmgr.dll");
                int handle = 0;
                int size = NativeMethods.GetFileVersionInfoSize(fileName, out handle);
                if (size == 0) return BitsVersion.Bits0_0;

                byte[] buffer = new byte[size];
                if (!NativeMethods.GetFileVersionInfo(fileName, handle, size, buffer))
                {
                    return BitsVersion.Bits0_0;
                }

                IntPtr subBlock = IntPtr.Zero;
                uint len = 0;
                if (!NativeMethods.VerQueryValue(buffer, @"\VarFileInfo\Translation", out subBlock, out len))
                {
                    return BitsVersion.Bits0_0;
                }

                int block1 = Marshal.ReadInt16(subBlock);
                int block2 = Marshal.ReadInt16((IntPtr)((int)subBlock + 2));
                string spv = string.Format(@"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2);

                string versionInfo;
                if (!NativeMethods.VerQueryValue(buffer, spv, out versionInfo, out len))
                {
                    return BitsVersion.Bits0_0;
                }
                ...

    The implementation follows the MSDN instructions to the letter. Still, during the second VerQueryValue(...) call the application crashes and kills the debug session without hesitation. A little more debug info right before the crash:

        spv    = "\StringFileInfo\040904B0\ProductVersion"
        buffer = byte[1900] - full with binary data
        block1 = 1033
        block2 = 1200

    I looked at the targeted "C:\Windows\System32\qmgr.dll" file (the implementation of BITS) via Windows. It says that the Product Version is 7.5.7600.16385. Instead of crashing, this value should be returned in the versionInfo string. Any advice?
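    A hedged observation on the most likely x64 culprit in the code above: (int)subBlock truncates a 64-bit pointer before the + 2, so the second read (and everything derived from it) can land on a garbage address. Marshal.ReadInt16 has an offset overload that sidesteps the cast entirely; a sketch of the safer form:

        int block1 = Marshal.ReadInt16(subBlock);
        int block2 = Marshal.ReadInt16(subBlock, 2);   // offset overload, pointer-size agnostic

        // If explicit arithmetic is ever needed, keep it 64-bit safe:
        // IntPtr second = new IntPtr(subBlock.ToInt64() + 2);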

    Read the article

  • So what RDF database do I use for a product-attribute situation? (Initially I thought about using EAV)

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters. I am convinced to use RDF now, mainly because of one of the comments made by Bill Karwin in the answer to that question, but I already have a database in MySQL and the code is in PHP.

    1) So what RDF database should I use?

    2) Do I combine the approaches? Meaning, do I keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products and the wide array of possible attributes and values that are giving me the problem.

    3) What PHP resources and articles should I look at that will help me build this?

    4) More articles or resources that help me better understand RDF in the context of the above challenge -- building something that will better hold all sorts of products' attributes and values -- would be greatly appreciated. I tend to work better when I have a conceptual understanding of what is going on.

    Do bear in mind I am a complete novice at this, and my knowledge of programming and databases is average at best.

    Read the article

  • CakePHP: how to use the Set class to make an associative array?

    - by michael
    I have the output array from a $Model->find() query which also pulls data from a hasMany relationship:

        Array(
            [Parent] => Array ( [id] => 1 )
            [Child] => Array (
                [0] => Array ( [id] => aaa [score] => 3 [src] => stage6/tn~4bbb38cc-0018-49bf-96a9-11a0f67883f5.jpg [parent_id] => 1 )
                [1] => Array ( [id] => bbb [score] => 5 [src] => stage0/tn~4bbb38cc-00ac-4b25-b074-11a0f67883f5.jpg [parent_id] => 1 )
                [2] => Array ( [id] => ccc [score] => 2 [src] => stage4/tn~4bbb38cc-01c8-44bd-b71d-11a0f67883f5.jpg [parent_id] => 1 )
            )
        )

    I'd like to transform this output into something like the following, where the child id is the key to additional child attributes:

        Array(
            [aaa] => Array ( [score] => 3 [src] => stage6/tn~4bbb38cc-0018-49bf-96a9-11a0f67883f5.jpg )
            [bbb] => Array ( [score] => 5 [src] => stage0/tn~4bbb38cc-00ac-4b25-b074-11a0f67883f5.jpg )
            [ccc] => Array ( [score] => 2 [src] => stage4/tn~4bbb38cc-01c8-44bd-b71d-11a0f67883f5.jpg )
        )

    Is there an easy way to use Set::extract, Set::combine, Set::insert, etc. to do this efficiently? I cannot figure it out.
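    A sketch of two options, assuming $data holds the find() result above. Set::combine can key the children by id, though the value path keeps the whole child row (including id and parent_id), and the exact path syntax is worth verifying against the CakePHP version in use; a plain loop gives exactly the requested shape.

        // Possibly sufficient, but keeps id/parent_id in the values:
        $byId = Set::combine($data, 'Child.{n}.id', 'Child.{n}');

        // Plain-loop fallback producing exactly the shape asked for:
        $byId = array();
        foreach ($data['Child'] as $child) {
            $byId[$child['id']] = array('score' => $child['score'], 'src' => $child['src']);
        }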

    Read the article

  • Database design and foreign keys: Where should they be added in related tables?

    - by Carvell Fenton
    I have a relatively simple subset of tables in my database for tracking something called sessions. These are academic sessions (think offerings of a particular program). The tables representing a session's information are:

        sessions
        session_terms
        session_subjects
        session_mark_item_info
        session_marks

    All of these tables have their own primary keys, and they form a tree: sessions have terms, terms have subjects, subjects have mark items, etc. So each one would have at least its "parent's" foreign key. My question is: design-wise, is it a good idea to also include the session's primary key in the other tables as a foreign key, to easily select related session items, or is that too much redundancy? If I include the session foreign key (or all parent foreign keys from tables up the hierarchy) in all the tables, I can easily select all the marks for a session with something like

        SELECT mark FROM session_marks WHERE sessionID=...

    If I don't, then I would have to combine selects with something like WHERE something IN (SELECT... Which approach is "more correct" or efficient? Thanks in advance!
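    A sketch of the join-based alternative, assuming each table carries only its direct parent's key; the column names are illustrative, not taken from the actual schema.

        -- All marks for one session, walking the hierarchy with joins instead of a
        -- denormalized session_id on every table:
        SELECT m.mark
        FROM session_marks m
        JOIN session_mark_item_info i ON i.id = m.mark_item_id
        JOIN session_subjects s       ON s.id = i.subject_id
        JOIN session_terms t          ON t.id = s.term_id
        WHERE t.session_id = 42;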

    Read the article

  • Mix Audio tracks with offset in SOX

    - by Laramie
    From ASP.NET, I am using FFMPEG to convert flv files on a Flash Media Server to wavs that I need to mix into a single MP3 file. I originally attempted this entirely with FFMPEG but eventually gave up on the mixing step, because I don't believe it is possible to combine audio-only tracks into a single result file. I would love to be wrong. I am now using FFMPEG to access the FLV files and extract the audio track to wav so that SoX can mix them. The problem is that I must offset one of the audio tracks by a few seconds so that they are synchronized. Each file is one half of a conversation between a student and a teacher. For example, teacher.wav might need to begin 3.3 seconds after student.wav. I can only figure out how to mix the files with SoX so that both tracks begin at the same time. My best attempt at this point is:

        ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav
        ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav
        sox -m student.wav teacher.wav combined.mp3 splice 3.3

    These tools (FFMPEG/SoX) were chosen based on my best research, but are not required. Any working solution would allow an ASP.NET service to take the two FMS flvs and create a combined MP3 using open-source or free tools.
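    A sketch of one way to get the offset, assuming teacher.wav is the track that should start 3.3 seconds late: pad that file with leading silence first, then mix. (SoX's splice effect joins segments end to end; pad is the effect that inserts silence at the start.)

        ffmpeg -y -i rtmp://server/appName/instance/student.flv -ac 1 student.wav
        ffmpeg -y -i rtmp://server/appName/instance/teacher.flv -ac 1 teacher.wav
        # insert 3.3 s of silence at the beginning of the teacher track
        sox teacher.wav teacher_offset.wav pad 3.3
        # mix the two tracks down to a single MP3
        sox -m student.wav teacher_offset.wav combined.mp3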

    Read the article

  • using mod-rewrite to redirect requests for jquery.js to GoogleAPI cache

    - by Aditya Advani
    Hi All, our Linux server with Apache 2.x and Plesk 8.x hosts a number of e-commerce websites. To take advantage of browser caching we would like to use Google's hosted copy of jquery.js. Hence in the vhost.conf file of each site we can use the following rules:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [nc]
        RewriteRule . http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    And in vhost_ssl.conf:

        RewriteCond %{REQUEST_FILENAME} jquery.min.js [nc]
        RewriteRule . https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L]

    These rules work fine in the individual vhost.conf files of each domain. However, we host over 200 domains, and I cannot seem to get the rules to work globally in the httpd.conf file. The challenges are the following:

        1. Get the rewrite rules to work in httpd.conf.
        2. Detect whether HTTPS is on and, if the request is for a secure page, rewrite to the https:// URL.
        3. Each individual domain will still have its own custom mod_rewrite rules. Which rules take precedence -- global or per-domain? Do they combine?
        4. Is it OK to have the "RewriteEngine On" directive in the global httpd.conf and then again in the vhost.conf?

    Please let me know what your suggestions are. Desperate for a solution to this problem.
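    A sketch of what a server-level block could look like, with the HTTPS check folded in. Two caveats worth verifying for this setup: rewrite rules defined at server level are not inherited into virtual hosts unless each vhost sets "RewriteOptions Inherit" (Plesk's per-domain vhost.conf is the natural place for that one line), and on older Apache 2.0 builds %{HTTPS} may need to be replaced with a %{SERVER_PORT} 443 check.

        # httpd.conf (server level)
        RewriteEngine On

        # HTTPS requests go to the https copy
        RewriteCond %{HTTPS} =on
        RewriteCond %{REQUEST_FILENAME} jquery\.min\.js [NC]
        RewriteRule . https://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L,R]

        # everything else goes to the http copy
        RewriteCond %{REQUEST_FILENAME} jquery\.min\.js [NC]
        RewriteRule . http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js [L,R]

        # each vhost.conf / vhost_ssl.conf then only needs:
        # RewriteEngine On
        # RewriteOptions Inherit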

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes toward programming:

        - I tend to be more of a purist, scorning inelegant approaches to solving problems with code.
        - I tend to look at everything at a large scale, planning it all before I start coding, either in simple flowcharts or complex UML charts.
        - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times.
        - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions.
        - I always want to study something new, even, as I said, at the cost of missing deadlines.
        - I tend to see software development as something to engineer and architect -- seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking.
        - I tend to combine OOP and procedural coding whenever I see fit.
        - I want my code to execute fast (thus the elegant approaches and the refactoring).

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't have to consider how clean their code is or how elegant their algorithms are. They don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila -- working code! Clients are happy, and they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code; they care how quickly you deliver it. If it works, all is good. Now, was my "purist" approach the wrong way to start programming? Should I just dump these purist concepts and code away, since I have seen that clients don't really care how beautifully coded it is?

    Read the article

  • XSLT inserting once off custom text

    - by BeraCim
    Hi all: the following is a pre-existing XML file. I was wondering how I can insert a <tag> element before the first <tag> element using XSLT?

        <XmlFile>
            <!-- insert another <tag> element here -->
            <tag>
                <innerTag>
                </innerTag>
            </tag>
            <tag>
                <innerTag>
                </innerTag>
            </tag>
            <tag>
                <innerTag>
                </innerTag>
            </tag>
        </XmlFile>

    I was thinking of using a for-each loop and testing the position, but by the first occurrence of the for-each it's already too late. This is a once-off insertion, so I can't combine it with other XSLT templates that are already inside the xsl file. Thanks.
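    A sketch of one way to do this as a standalone stylesheet: an identity transform copies everything through unchanged, and a second template fires only on the first <tag> child, emitting the new element just before copying the original. The content of the inserted <tag> is a placeholder.

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

            <!-- identity transform: copy everything as-is -->
            <xsl:template match="@*|node()">
                <xsl:copy>
                    <xsl:apply-templates select="@*|node()"/>
                </xsl:copy>
            </xsl:template>

            <!-- first <tag> only: emit the once-off element, then copy the original -->
            <xsl:template match="/XmlFile/tag[1]">
                <tag>
                    <innerTag>once-off content here</innerTag>
                </tag>
                <xsl:copy>
                    <xsl:apply-templates select="@*|node()"/>
                </xsl:copy>
            </xsl:template>

        </xsl:stylesheet>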

    Read the article

  • Simplifying Jquery code HELP!

    - by user342391
    I am trying to load two modal dialog boxes with jQuery. Both of them load separate pages using Ajax. The only problem is that only one of them works. I think I need to simplify my code but am unsure how.

        <script type="text/javascript">
        $(document).ready(function(){
            var dialogOpts = {
                modal: true,
                bgiframe: true,
                autoOpen: false,
                height: 400,
                width: 550,
                draggable: true,
                resizeable: true,
                title: "Your campaign rates",
            };

            $("#ratesbox").dialog(dialogOpts); //end dialog

            $('#ratesbutton').click(function() {
                $("#ratesbox").load("rate_sheet/index.php", [], function(){
                    $("#ratesbox").dialog("open");
                });
                return false;
            });
        });
        </script>

        <script type="text/javascript">
        $(document).ready(function(){
            var dialogOptsPass = {
                modal: true,
                bgiframe: true,
                autoOpen: false,
                height: 400,
                width: 550,
                draggable: true,
                resizeable: true,
                title: "Change your pasword",
            };

            $("#passwordbox").dialog(dialogOptsPass); //end dialog

            $('#passwordbutton').click(function() {
                $("#passwordbox").load("change_password/index.php", [], function(){
                    $("#passwordbox").dialog("open");
                });
                return false;
            });
        });
        </script>

    Is it possible to combine the two scripts?
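    A sketch of one combined version: the shared options collapse into a small helper so both dialogs are wired from a single ready() block. Two details from the original are also worth checking regardless: the trailing commas after the "title" values break older IE, and the jQuery UI option name is "resizable", not "resizeable".

        <script type="text/javascript">
        $(document).ready(function () {
            function wireDialog(boxSelector, buttonSelector, title, url) {
                var box = $(boxSelector).dialog({
                    modal: true, bgiframe: true, autoOpen: false,
                    height: 400, width: 550, draggable: true, resizable: true,
                    title: title
                });
                $(buttonSelector).click(function () {
                    box.load(url, function () {
                        box.dialog("open");
                    });
                    return false;
                });
            }

            wireDialog("#ratesbox", "#ratesbutton", "Your campaign rates", "rate_sheet/index.php");
            wireDialog("#passwordbox", "#passwordbutton", "Change your password", "change_password/index.php");
        });
        </script>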

    Read the article

  • Joining two tables in Oracle

    - by Deven
    Hi friends, I am having a problem joining two tables in Oracle. My two tables are shown below.

    table1 looks like:

        id      Name      Jan
        7001    Deven     22
        7002    Clause    55
        7004    Monish    11

    table2 looks like:

        id      Name      Feb
        7001    Deven     12
        7002    Clause    15
        7003    Nimesh    20
        7004    Monish    21
        7005    Ritesh    22

    I want to combine these two tables and get a result like the one below:

        id      Name      Jan    Feb
        7001    Deven     22     12
        7002    Clause    55     15
        7003    Nimesh    -      20
        7004    Monish    11     21
        7005    Ritesh    -      22
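    A sketch using a full outer join; the names table1/table2 are taken from the question, and NVL fills the id and name from whichever side has the row. Missing months come back as NULL rather than the literal "-" shown above.

        SELECT NVL(t1.id, t2.id)     AS id,
               NVL(t1.name, t2.name) AS name,
               t1.jan,
               t2.feb
        FROM   table1 t1
        FULL OUTER JOIN table2 t2
               ON t1.id = t2.id
        ORDER  BY 1;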

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching the most used data. Here are the two approaches we have thought of right now:

        1. Use memcache on all major queries and leave it alone to do what it does best.
        2. Use memcache for the most commonly retrieved data, and combine it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
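    A sketch of what the combined (second) approach usually looks like in code: a read-through lookup that tries RAM first and falls back to disk. $memcache is assumed to be a connected pecl/memcache instance; diskCacheGet()/diskCacheSet() are hypothetical file-cache helpers, not an existing API.

        <?php
        function cacheGet($key)
        {
            global $memcache;

            $value = $memcache->get($key);
            if ($value !== false) {
                return $value;                         // hot: served from RAM
            }

            $value = diskCacheGet($key);               // warm: served from disk
            if ($value !== null) {
                $memcache->set($key, $value, 0, 300);  // promote back into memcache for 5 minutes
            }
            return $value;                             // null means a full miss: rebuild, then cache both tiers
        }
        ?>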

    Read the article

  • Hiring my first employee

    - by Ady
    A few years ago I moved to a new job, having programmed for 2 years in C#; however, this new company was mainly using VB6. I made the case for .NET and won, but one of the concessions I had to make was to use VB.NET and not C# (understandable, as most of the other developers were already using VB). Three years later it was time to move on, but when applying for jobs I couldn't get past the recruitment agents. I realised that when they looked at the basic requirements (5 years' experience) they could not add 2 and 3 together to make 5: they were looking for 5 years in VB or C#, not across both. Frustrated, I decided to combine my skills with a designer friend and start my own company. After two years of hard graft we are now looking for our first employee (a programmer), and this question has hit me again, but now I see the employer's perspective: why take the risk of someone getting up to speed when you have thousands of applicants to choose from? So my question is this: if I define the requirements too narrowly, I could miss the really great candidates, but if they are too broad, it's going to take ages to go through them all. This will be our first 'employee', so the choice needs to be good; I can't afford to make a mistake and employ someone naff. Another option would be to choose a bright university graduate and train them up (less of a risk because we can pay them less). What have others done in this situation, and what would you recommend I do?

    Read the article

  • Adding items to a combo box's internal list programmatically.

    - by Andrew
    So, despite Matt's generous explanation in my last question, I still didn't understand, and decided to start a new project and use an internal list.

        - (void)applicationDidFinishLaunching:(NSNotification *)aNotification
        {
            codesList = [[NSString alloc] initWithContentsOfFile: @".../.../codelist.txt"];
            namesList = [[NSString alloc] initWithContentsOfFile: @".../.../namelist.txt"];

            codesListArray = [[NSMutableArray alloc] initWithArray:[codesList componentsSeparatedByString:@"\n"]];
            namesListArray = [[NSMutableArray alloc] initWithArray:[namesList componentsSeparatedByString:@"\n"]];

            addTheDash = [[NSString alloc] initWithString:@" - "];

            flossNames = [[NSMutableArray alloc] init];
            [flossNames removeAllObjects];

            for (int n=0; n<=[codesListArray count]; n++) {
                NSMutableString *nameBuilder = [[NSMutableString alloc] initWithFormat:@"%@", [codesListArray objectAtIndex:n]];
                [nameBuilder appendString:addTheDash];
                [nameBuilder appendString:[namesListArray objectAtIndex:n]];
                [comboBoz addItemWithObjectValue:[NSMutableString stringWithString:nameBuilder]];
                [nameBuilder release];
            }
        }

    So this is my latest attempt, and the list still isn't showing in my combo box. I've tried using addItemsWithObjectValues: outside the for loop, along with the suggestions in this question: "Is this the right way to add items to NSComboBox in Cocoa?" But still no luck. If you can't tell, I'm trying to combine two strings from the files with a hyphen in between them and then put that new string into the combo box. There are over 400 codes and matching names in the two files, so manually putting them in would be a huge chore; not to mention, I don't see what would be causing this problem. The compiler shows no warnings or errors, and in IB I have the combo box set to use the internal list, but when I run it, the list is not populated unless I do it manually. Some things I thought might be causing it:

        - being in the applicationDidFinishLaunching: method
        - having the string and array variables declared as instance variables in the header (along with @property and @synthesize done to them)
        - messing around with using appendString: multiple times with NSMutableArrays

    Nothing seems to be causing this to me, but maybe someone else will know something I don't. Thanks for the help.
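    A hedged sketch of the loop with its out-of-range walk removed: n <= [codesListArray count] goes one index past the end, and an exception thrown inside applicationDidFinishLaunching: can silently leave the combo box empty (the other usual suspect is the comboBoz outlet not being connected in IB). File paths are left elided as in the question.

        NSUInteger count = MIN([codesListArray count], [namesListArray count]);
        for (NSUInteger n = 0; n < count; n++) {
            // build "code - name" in one step; no mutable-string bookkeeping needed
            NSString *line = [NSString stringWithFormat:@"%@ - %@",
                              [codesListArray objectAtIndex:n],
                              [namesListArray objectAtIndex:n]];
            [comboBoz addItemWithObjectValue:line];
        }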

    Read the article

  • Loading a helper elsewhere than the autoload.php?

    - by drpcken
    I inherited a project, and I'm cleaning it up a bit and trying to finish it. I noticed that they used (or wrote) a breadcrumb helper. It is in my helpers folder and is named breadcrumb_helper.php. It has a single function to build a breadcrumb menu with links and pass it to the view breadcrumbs.php. Here's the code:

        function show_breadcrumbs()
        {
            $ci =& get_instance();
            $ci->load->helper('inflector');

            $data = '';
            //build breadcrumb and store in $data

            $this->load->view("breadcrumbs", $data)
        }

    I was trying to figure out how this helper worked, and I checked autoload.php, but there is no reference to the helper in there. In fact, here is my autoload:

        $autoload['helper'] = array('url','asset','combine','navigation','form','portfolio','cookie','default');

    This show_breadcrumbs() function is used quite a bit in some of my pages, so I'm confused as to how it's loading if it isn't in the autoloader. It is called like this in a few of my pages:

        <?=show_breadcrumbs()?>

    What am I missing? Why isn't this in my autoload? I even did a global search and couldn't find anywhere the helper is being loaded.
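    A sketch of the two usual ways a CodeIgniter helper gets loaded outside autoload.php; grepping the controllers, models, and views for either call (or simply for the string 'breadcrumb') is the quickest way to find where it happens. It is also worth confirming the $this->load->view(...) line really appears inside the helper as shown, since $this is normally only valid in controllers and models -- inside a helper function it would be $ci->load->view(...).

        // inside any controller or model:
        $this->load->helper('breadcrumb');

        // anywhere else a CI instance is available (e.g. inside another helper):
        $ci =& get_instance();
        $ci->load->helper('breadcrumb');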

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:

        merge :: Tree a -> Tree a -> Tree a   --// O(log n) where n is size of input trees
        singleton :: a -> Tree a              --// O(1)
        empty :: Tree a                       --// O(1)

        listToTree :: [a] -> Tree a           --// Supposedly O(n)
        listToTree = listToTreeR . (map singleton)

        listToTreeR :: [Tree a] -> Tree a
        listToTreeR []     = empty
        listToTreeR (x:[]) = x
        listToTreeR xs     = listToTreeR (mergePairs xs)

        mergePairs :: [Tree a] -> [Tree a]
        mergePairs []       = []
        mergePairs (x:[])   = [x]
        mergePairs (x:y:xs) = merge x y : mergePairs xs

    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
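    A hedged hint at the missing step (written independently of the book, so treat the constants as illustrative): instead of multiplying the worst level count by the worst merge cost, bound the work level by level and sum. At recursion depth k (k = 0 for the call on the n singletons), the list holds roughly n/2^k trees of size about 2^k, so mergePairs performs about n/2^(k+1) merges, each costing O(log 2^k) = O(k). In LaTeX:

        \text{total work} \;\le\; \sum_{k=0}^{\lceil \log_2 n \rceil} \frac{n}{2^{k+1}}\, c\,(k+1)
        \;\le\; c\, n \sum_{k \ge 0} \frac{k+1}{2^{k+1}}
        \;=\; 2\,c\,n \;=\; O(n).

    The point is that the series \sum_{k \ge 0} (k+1)/2^{k+1} converges to a constant, so the per-level costs shrink fast enough that the whole build is linear -- the same argument as the classic bottom-up heap construction.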

    Read the article

  • maven and unit testing - combining maven surefire plugin AND testNG eclipse plugin

    - by lisak
    Hey, could you please share your way of unit testing in Eclipse? Are you using the Surefire plugin with m2eclipse and Maven, only the TestNG Eclipse plugin, or a combination of these alternatives? I'm using TestNG + the Maven Surefire plugin, and a year ago I was using the TestNG Eclipse plugin so that I could see the results in the TestNG view. Then I started using Maven, but when I run the Maven test phase via m2eclipse, there is only console output plus the Surefire reports that I can check in a browser, and choosing which test suite, test, or test method to run can be set up only via testng.xml. On the other hand, if you use only the Surefire plugin and rely on some specific settings (regarding the classpath, etc.), then running tests via the TestNG Eclipse plugin isn't necessarily compatible with your code. With the Surefire plugin the classpath is different -- target/test-classes and target/classes -- whereas the TestNG plugin uses the project classpath. How do you go about this? Is it possible to synchronize "mvn test" via m2eclipse and the Surefire plugin WITH the TestNG Eclipse plugin and its view? EDITED: I'm also wondering why the Maven project's ("Java Build Path") output folder is target/classes for both src/main and src/test, whereas the Surefire plugin uses two locations, target/test-classes and target/classes. Thank you very much for your opinions.
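    One common way to keep the two runners in step, sketched here: point Surefire at the same testng.xml the Eclipse TestNG plugin runs, so "mvn test" and the IDE execute an identical suite definition. The file location below is illustrative.

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
                <suiteXmlFiles>
                    <!-- the same suite file the TestNG Eclipse plugin is pointed at -->
                    <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
                </suiteXmlFiles>
            </configuration>
        </plugin>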

    Read the article

  • Facebook new js api and cross-domain file

    - by vondip
    Hi all, I am building a simple Facebook iframe application. Since the code is separate from Facebook anyway, I've decided I will also create a Connect website. In my Connect website I'm trying to figure out the following: I am using Facebook's new API and I am calling the init function, but I can't seem to figure out where to plug in my cross-domain file. There's no mention of it in their documentation either. http://developers.facebook.com/docs/reference/javascript/FB.init I am referring to these lines of code:

        <div id="fb-root"></div>
        <script>
            window.fbAsyncInit = function() {
                FB.init({appId: 'your app id', status: true, cookie: true, xfbml: true});
            };
            (function() {
                var e = document.createElement('script');
                e.async = true;
                e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
                document.getElementById('fb-root').appendChild(e);
            }());
        </script>
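    A hedged note rather than a definitive answer: the xd_receiver.htm cross-domain file belonged to the old FB.Connect JavaScript library, and the new SDK's FB.init shown above has no setting for it -- the SDK handles the cross-domain framing itself. Later builds of the SDK accept an optional channel file via a channelUrl parameter; whether that option exists in the SDK build in use is something to verify against the current FB.init documentation. A sketch:

        window.fbAsyncInit = function() {
            FB.init({
                appId: 'your app id',
                status: true,
                cookie: true,
                xfbml: true,
                channelUrl: 'http://www.example.com/channel.html'  // optional channel file, if supported
            });
        };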

    Read the article

  • Performing calculations by subsets of data in R

    - by Vivi
    I want to perform calculations for each company number in the column PERMNO of my data frame, the summary of which can be seen here:

        > summary(companydataRETS)
             PERMNO           RET
         Min.   :10000   Min.   :-0.971698
         1st Qu.:32716   1st Qu.:-0.011905
         Median :61735   Median : 0.000000
         Mean   :56788   Mean   : 0.000799
         3rd Qu.:80280   3rd Qu.: 0.010989
         Max.   :93436   Max.   :19.000000

    My solution so far was to create a variable with all possible company numbers:

        compns <- companydataRETS[!duplicated(companydataRETS[,"PERMNO"]),"PERMNO"]

    and then use a foreach loop with parallel computing which calls my function get.rho(), which in turn performs the desired calculations:

        rhos <- foreach (i=1:length(compns), .combine=rbind) %dopar%
            get.rho(subset(companydataRETS[,"RET"], companydataRETS$PERMNO == compns[i]))

    I tested it on a subset of my data and it all works. The problem is that I have 72 million observations, and even after leaving the computer working overnight, it still didn't finish. I am new to R, so I imagine my code structure can be improved upon and that there is a better (quicker, less computationally intensive) way to perform this same task (perhaps using apply or with, both of which I don't understand). Any suggestions?
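    A sketch of two grouped alternatives that avoid re-scanning the whole 72-million-row frame once per PERMNO, assuming get.rho() accepts a numeric vector of returns. (The data.table form assumes that package is installed; the column name rho is illustrative.)

        # base R: split-and-apply in one pass over the data
        rhos <- tapply(companydataRETS$RET, companydataRETS$PERMNO, get.rho)

        # data.table: the same grouping, usually much faster at this scale
        library(data.table)
        dt <- as.data.table(companydataRETS)
        rhos <- dt[, list(rho = get.rho(RET)), by = PERMNO]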

    Read the article

  • Swimlane Diagram Software with Expand/Collapse Features

    - by louis xie
    I've been searching really hard for software that can fulfill my needs, but to no avail. I have a swimlane diagram which is extremely huge and almost impossible to model using Visio or any traditional swimlane software. I need to model both the operational process and the interactions within an application and between different applications. Therefore, without wasting additional effort modelling these separately, I am looking for a solution in which I can combine both views -- that is, one in which I can expand/collapse and group/ungroup processes and subprocesses. Take a typical credit card process, for instance; a hypothetical description of the swimlane could be as follows:

        1. Customer submits application form to the bank.
        2. Bank Officer A receives the application form and validates that it was correctly filled in.
        3. Bank Officer A submits the application form to Bank Officer B for processing.
        4. Bank Officer B checks the credit quality of the customer through Application X.
        5. Application X submits a query to Application Y to retrieve the credit report.
        6. Application X retrieves the credit report and submits it to Application Z for computation of credit scores.
        7. Bank Officer B validates that the customer is creditworthy and submits the application to Bank Officer C for processing.

    The above is an over-simplified credit card request process, and a purely hypothetical one. What I'm trying to drive at is that each of the above processes has sub-processes, and I want to be able to switch between a "detailed" view and an "aggregated" view -- and, if possible, add in the time dependency of the different tasks as well.

    Read the article

  • How to apply iddata in a calculation?

    - by Sam
    I am trying to figure out how to combine the input and output data into the ARX model and then apply it to the BIC (Bayesian Information Criterion) formula. Below is the code that I am currently working on:

        for i=1:30;   %% Set Model Order
            data  = iddata(output,input,1);
            model = arx(data,[8 9 i]);
            yp    = predict(model,data);
            ye    = regress(data,yp{1,1}(1:4018,1));
            M(i)  = var(yp);
            BIC(i) = (N+i*(log(N)-1))/(N-i)*log(M(i));
        end

    But it does not work. It keeps giving me an error something like the one below:

        The syntax "Data{...}" is not supported. Use the "getexp" command to extract
        individual experiments from an IDDATA object.

    I do not understand what that means. Can someone explain it to me, and where did I go wrong in my piece of code? Thanks in advance. Sam.
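    A hedged sketch of one likely fix: predict() returns an iddata object here, and brace-indexing it (yp{1,1}) is exactly what triggers the getexp error, since braces are reserved for multi-experiment access. Pulling the numeric output out of the object first avoids both that and the regress() call. The residual-variance line below is an assumption about the intent (loss-based criteria normally use residuals); swap back to var() of the predictions if that really was what was wanted, and N is taken to be the number of samples, as in the question's formula.

        for i = 1:30   %% Set Model Order
            data  = iddata(output, input, 1);
            model = arx(data, [8 9 i]);
            yp    = predict(model, data);

            yhat  = yp.OutputData;           % numeric predicted output (or: get(yp,'OutputData'))
            e     = data.OutputData - yhat;  % one-step-ahead residuals
            M(i)  = var(e);                  % residual variance feeds the criterion
            BIC(i) = (N + i*(log(N)-1)) / (N-i) * log(M(i));
        end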

    Read the article
