Search Results

Search found 67 results on 3 pages for 'sneaky wombat'.

Page 1/3 | 1 2 3  | Next Page >

  • IIS 7’s Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

    If you’re like me, you don’t really do a lot with COM components these days. For me, I’ve been ‘lucky’ to stay in the managed world for the past 6 or 7 years. Until last week.

    I’m running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. I built some COM-InterOp DLLs (easily, through VS2012’s Add Reference feature… nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. (There’s a document management system I’m thinking of whose COM DLLs were not proper COM DLLs, and they crashed and burned every time .NET tried to call them through a COM-InterOp layer.)

    Anyway, my test app worked like a champ, and I felt confident that I could build a nice façade around the COM DLLs, wrap some functionality internally, and expose to my users/clients only what they really needed. So I did this, built some tests, and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn’t much overhead involved in going through the COM-InterOp layer.

    You know where this is going, don’t you?

    So I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

        Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed
        due to the following error: 80040154 Class not registered
        (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch!

    Then I came across this question on Experts Exchange, and at the bottom of the page someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah: my machine is 64-bit, and these COM DLLs I’m using are old and definitely 32-bit. I hadn’t checked for that and hadn’t even thought about it. So I looked at the app pool my web site was running under, and what did I see? Yep: select your app pool in IIS 7.x, click Advanced Settings, and check “Enable 32-bit Applications”. I set it to True and my test application suddenly worked.

    Hope this helps keep somebody out there from pulling out their hair.
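    For reference, the same setting can be flipped from the command line. A minimal sketch, assuming an application pool named "MyAppPool" (a placeholder - substitute your own pool name):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true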

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?”

    Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for. In general, that’s quite correct; however, there are cases where SQL Server might sneakily retrieve a LOB column…

    Example Table

    Here’s a T-SQL script to create the test table and populate it with 1,000 rows:

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER IDENTITY NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO

        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1,
                 master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (some_value, lob_data)
        SELECT TOP (1000)
            N.n,
            @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Test 1: A Simple Update

    Let’s run a query to subtract one from every value in the some_value column:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources. The STATISTICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time. The query plan is also very simple. Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan.

    The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed. The some_value column is used by the Compute Scalar to calculate the new value. (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT.)

    Test 2: Simple Update with an Index

    Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column:

        CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);

    This is not a useful index for our simple update query; imagine that someone else created it for a different purpose. Let’s run our update query again:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    We find that it now requires 4,014 logical reads, and the elapsed query time has increased to around 100ms. The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index.

    The query plan is very similar to before. The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update. SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details). Our simple query does change the value of some_value in every row, so this optimization doesn’t add any value in this specific case.

    The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_value columns. Cool.
    Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table. Let’s see what happens if we make the nonclustered index unique.

    Test 3: Simple Update with a Unique Index

    Here’s the script to create a new unique index, and drop the old one:

        CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest (some_value)
        INCLUDE (lob_data)
        WITH (FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON);
        GO

        DROP INDEX [IX dbo.LOBtest some_value (lob_data)]
        ON dbo.LOBtest;

    Remember that SQL Server only enforces uniqueness on index keys (the some_value column). The lob_data column is simply stored at the leaf level of the nonclustered index. With that in mind, we might expect this change to make very little difference. Let’s see:

        UPDATE dbo.LOBtest WITH (TABLOCKX)
        SET some_value = some_value - 1;

    Whoa! Now look at the elapsed time and logical reads:

        Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        CPU time = 172 ms, elapsed time = 16172 ms.

    Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column?

    The Query Plan

    The query plan for test 3 looks a bit more complex than before. In fact, the bottom level is exactly the same as we saw with the non-unique index. The top level has heaps of new stuff though, which I’ll come to in a moment.

    You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason). After all, we need to explain where all the LOB logical reads are coming from. Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns. So what’s doing the LOB reads?

    Updates that Sneakily Read Data

    We have to go as far as the Clustered Index Update operator before we see LOB data in the output list. [Expr1020] is a bit flag added by an earlier Compute Scalar. It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier).

    The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD. The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column. At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet.

    An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation. SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update. The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then. Sneaky!

    Why the LOB Data Is Needed

    This is all very interesting (I hope), but why is SQL Server reading the LOB data? For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.
    I’ll reproduce it here for convenience. Notice that this is a wide (per-index) update plan. SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too. I’ll talk more about this difference shortly.

    The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient. It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index.

    Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3. If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4. The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4.

    By applying net changes, SQL Server can also avoid false unique-key violations. If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3, of course) and the query would fail. You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down. That’s fine, but it’s not possible to generalize this logic to work with every possible update query.

    SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations. It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits. As a result, you will often receive a wide update plan, even when you can see that no violations are possible.

    Another benefit of this optimization is that it includes a sort on the index key as part of its work. Processing the index changes in index key order promotes sequential I/O against the nonclustered index.

    A side-effect of all this is that the net changes might include one or more inserts. In order to insert a new row in the index, SQL Server obviously needs all the columns - the key column and the included LOB column. This is the reason SQL Server reads the LOB data as part of the Clustered Index Update.

    In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs). In order to generate the correct index key delete operation, it needs the old key value.

    The irony is that in this case the Split/Sort/Collapse optimization is anything but. Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it.

    Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates.

    Beating the Set-Based Update with a Cursor

    One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.
    Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data:

        SET NOCOUNT ON;
        SET STATISTICS XML, IO, TIME OFF;

        DECLARE @PK INTEGER, @StartTime DATETIME;
        SET @StartTime = GETUTCDATE();

        DECLARE curUpdate CURSOR
            LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS
        FOR
            SELECT L.pk
            FROM LOBtest L
            ORDER BY L.pk ASC;

        OPEN curUpdate;

        WHILE (1 = 1)
        BEGIN
            FETCH NEXT FROM curUpdate INTO @PK;

            IF @@FETCH_STATUS = -1 BREAK;
            IF @@FETCH_STATUS = -2 CONTINUE;

            UPDATE dbo.LOBtest
            SET some_value = some_value - 1
            WHERE CURRENT OF curUpdate;
        END;

        CLOSE curUpdate;
        DEALLOCATE curUpdate;

        SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE());

    That completes the update in 1,280 milliseconds (remember, test 3 took over 16 seconds!). I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it. One could just as well use a WHERE clause that specified the primary key value instead.

    Clustered Indexes

    A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index. Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes:

        IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL
            DROP TABLE dbo.LOBtest;
        GO

        CREATE TABLE dbo.LOBtest
        (
            pk INTEGER NOT NULL,
            some_value INTEGER NULL,
            lob_data VARCHAR(MAX) NULL,
            another_column CHAR(5) NULL,
            CONSTRAINT [PK dbo.LOBtest pk]
                PRIMARY KEY CLUSTERED (pk ASC)
        );
        GO

        DECLARE @Data VARCHAR(MAX);
        SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);

        WITH Numbers (n) AS
        (
            SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0))
            FROM master.sys.columns C1,
                 master.sys.columns C2
        )
        INSERT LOBtest WITH (TABLOCKX)
            (pk, some_value, lob_data)
        SELECT TOP (1000)
            N.n,
            N.n,
            @Data
        FROM Numbers N
        WHERE N.n <= 1000;

    Now here’s a query to modify the cluster keys:

        UPDATE dbo.LOBtest
        SET pk = pk + 1;

    As you can see from the query plan, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection. In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan. The performance is not great, as you might expect (even though there is no non-clustered index to maintain):

        Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0,
        lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.

        Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0,
        lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.

        SQL Server Execution Times:
        CPU time = 483 ms, elapsed time = 17884 ms.

    Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool.

    If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around, with no LOB data or other non-key columns. A unique non-clustered index (on a heap) works well too. Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads. (In fact the heap is rather more efficient.)

    There are lots more fun combinations to try that I don’t have space for here.

    Final Thoughts

    The behaviour shown in this post is not limited to LOB data by any means.
    If the conditions are met, any unique index that has included columns can produce similar behaviour - something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

    Read the article

  • Sneaky Javascript For Loop Bug

    - by Liam McLennan
    Javascript allows you to declare variables simply by assigning a value to an identifier, in the same style as ruby:

        myVar = "some text";

    Good javascript developers know that this is a bad idea, because undeclared variables are assigned to the global object, usually window, making myVar globally visible. So the above code is equivalent to:

        window.myVar = "some text";

    What I did not realise is that this applies to for loop initialisation as well.

        for (i = 0; i < myArray.length; i += 1) { }

        // is equivalent to

        for (window.i = 0; window.i < myArray.length; window.i += 1) { }

    Combine this with function calls nested inside of the for loops and you get some very strange behaviour, as the value of i is modified simultaneously by code in different scopes. The moral of the story is to ALWAYS declare javascript variables with the var keyword, even when initialising a for loop.

        for (var i = 0; i < myArray.length; i += 1) { }
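    A quick way to catch this class of bug, assuming an ES5-capable engine: strict mode turns the implicit global assignment into an error instead of silently creating window.i.

        "use strict";
        for (i = 0; i < myArray.length; i += 1) { } // ReferenceError: i is not defined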

    Read the article

  • The case of the sneaky backslash - Regex

    - by Shane Cusson
    I'm missing something very obvious here, but I just can't see it. I've got:

        string input = @"999\abc.txt";
        string pattern = @"\\(.*)";
        string output = Regex.Match(input, pattern).ToString();
        Console.WriteLine(output);

    My result is:

        \abc.txt

    I don't want the slash, and can't figure out why it's sneaking into the output. I tried flipping the pattern, and the slash winds up in the output again:

        string pattern = @"^(.*)\\";

    and get:

        999\

    Strange. The result is fine in Osherove's Regulator. Any thoughts? Thanks.
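    One likely explanation, sketched here: Match.ToString() returns the entire match, backslash included, while the parenthesised group already excludes it, so reading Groups[1] should give the text without the slash.

        // sketch: read the capture group rather than the whole match
        using System;
        using System.Text.RegularExpressions;

        class Demo
        {
            static void Main()
            {
                string input = @"999\abc.txt";
                Match m = Regex.Match(input, @"\\(.*)");
                Console.WriteLine(m.Value);           // \abc.txt (entire match)
                Console.WriteLine(m.Groups[1].Value); // abc.txt (group 1 only)
            }
        }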

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question

    Will I get penalised as "sneaky Javascript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?

    Goal

    I want to implement Hijax to make AJAX content accessible to non-JavaScript users and search engine crawlers.

    Background

    I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm also trying to keep my files DRY: I don't want to repeat the common OUTER template in all my files (i.e. header, navigation menu, footer, etc), so the template will live in the main index.html.

    Setup

    The Hijax index.html page contains all the OUTER html/css/js - the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html, and a navigation menu with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler), the link is followed to /about.html. With JavaScript enabled (e.g. most people), the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.

    AJAX content

    about.html does not contain anything extra to try and cloak content for crawlers. The file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data, e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). Its <body> contains a <div id="inner-container"> which holds the content that is injected into index.html, and a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu.

    The (Sneaky?) Redirect

    Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deep-linking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds" (a sketch follows below). The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical.

    Known issue with inbound links (e.g. share / bookmarking)

    It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # and allocates value to the / page - see: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html.

    Duplicate

    Sorry - I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site. Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.
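    The redirect sketch mentioned above - hypothetical code for about.html, assuming index.html can restore the "about" state from the hash fragment:

        // send JavaScript-capable visitors to the enhanced page after 10 seconds
        setTimeout(function () {
            window.location.replace("/#about");
        }, 10000);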

    Read the article

  • Creating a list in Python- something sneaky going on?

    - by GlenCrawford
    Apologies if this doesn't make any sense, I'm very new to Python! From testing in an interpreter, I can see that list() and [] both produce an empty list:

        >>> list()
        []
        >>> []
        []

    From what I've learned so far, the only way to create an object is to call its constructor (__init__), but I don't see this happening when I just type []. So by executing [], is Python then mapping that to a call to list()?
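    One way to see what is actually happening, assuming Python 3 (exact opcode names vary between versions): the dis module shows that the [] literal compiles straight to a list-building opcode, with no lookup or call of the name list.

        import dis

        dis.dis("[]")      # BUILD_LIST 0 - the literal builds the list directly
        dis.dis("list()")  # LOAD_NAME (list) followed by a call - a real name lookup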

    Read the article

  • Using awk to split text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzip'd text file. I'd like to do something like:

        zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 > smallerPartFile.gz

    In the awk part up there, I basically want it to take 10,000 lines and send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original to the ones that were split and then merged, lines are missing. So, something is wrong with the awk part and I'm not sure what part is broken.

    Here's the code. Can someone tell me why this doesn't yield a file that can be split and merged and then diff'd to the original successfully?

        # Generate files part0.dat.gz, part1.dat.gz, etc.
        # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)

        prefix="foo"
        count=0
        suffix=".sql"
        lines=10000 # Split every 10000 lines.

        zcat /home/foo/foo.sql.gz |
        while true; do
            partname=${prefix}${count}${suffix}

            # Use awk to read the required number of lines from the input stream.
            awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}

            if [[ -s ${partname} ]]; then
                # Compress this part file.
                gzip -9 ${partname}
                (( ++count ))
            else
                # Last file generated is empty, delete it.
                rm -f ${partname}
                break
            fi
        done
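    One likely culprit, offered as a guess: awk reads its input in large buffered blocks, so each awk invocation consumes more lines from the pipe than it prints, and the surplus is lost when it exits. An untested sketch that sidesteps the problem by letting split(1) do the chunking (assuming GNU coreutils):

        # sketch: split the stream into 10,000-line parts, then compress each part
        zcat /home/foo/foo.sql.gz | split -l 10000 - foo
        gzip -9 foo*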

    Read the article

  • how to save canvas as png image?

    - by sneaky
    I have a canvas element with a drawing in it, and I want to create a button that, when clicked, will save the image as a png file. So it should open up the save/open/cancel dialog box... I do it using this code:

        var canvas = document.getElementById("myCanvas");
        window.open(canvas.toDataURL("image/png"));

    But when I test it out in IE9, a new window opens up saying "the web page cannot be displayed", and its url is the image's very long data URI (data:image/png;base64,iVBORw0KGgo...). Anyone know how to fix this?
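    A sketch of the usual workaround in browsers that support it (note that IE9 does not implement the download attribute, so it would still need a fallback there):

        // hypothetical sketch: trigger a save via a temporary anchor element
        var canvas = document.getElementById("myCanvas");
        var link = document.createElement("a");
        link.href = canvas.toDataURL("image/png");
        link.download = "drawing.png"; // ask the browser to save rather than navigate
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);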

    Read the article

  • Connection problem between redmine and svn

    - by WombaT
    I have a server running Debian 6.0. I installed and configured redmine some time ago, and have now configured an svn server. I'm trying to configure redmine to be able to view the svn repository. The URL is:

        https://192.168.11.78/svn/bee

    The connection is not working, and the log shows this error:

        Error parsing svn output: #<REXML::ParseException: No close tag for /lists/list>

    Google says that it's a common error, and that it's possible to fix it by permanently accepting the server certificate. I did that, and nothing - it still doesn't work. Later, I added

        [global]
        store-plaintext-passwords = no

    to the .subversion/servers file. I did this (and the certificate accept) for both the root and www-data users. Nothing helped; I still get this error in redmine:

        The entry or revision was not found in the repository.

    What else can I do?

    Read the article

  • Dynamic procmail filters

    - by WombaT
    I need procmail to place incoming mail into a specific folder depending on some set of rules. I know how to accomplish this, but only by writing a static set of rules in a specific file. What I really need is to configure procmail to use rules stored in a mysql database. How can I do this?

    I've read a bit about it, and one solution I found is to pipe the message to a php/perl script and have it return the name of the folder the message should go to. But I have completely no idea how to use a php script as a rule and then use its return value.
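    For what it's worth, an untested sketch of the pipe-to-script idea, where pick-folder.php is a hypothetical script that reads the message on stdin, consults the mysql rules, and prints a folder name:

        # .procmailrc sketch: a backtick assignment feeds the message to the
        # command's stdin and captures its output
        FOLDER=`/usr/local/bin/pick-folder.php`

        # the trailing slash below assumes maildir-style folders
        :0:
        $FOLDER/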

    Read the article

  • Any way to use a class extension method to support an interface method in C#?

    - by dudeNumber4
    The console app below compiles, but the interface cast fails at run time. Is there an easy way to make this work?

        using System;

        namespace ConsoleApplication1
        {
            class Monkey
            {
                public string Shock
                {
                    get { return "Monkey has been shocked."; }
                }
            }

            static class MonkeyExtensionToSupportIWombat
            {
                public static string ShockTheMonkey( this Monkey m )
                {
                    return m.Shock;
                }
            }

            interface IWombat
            {
                string ShockTheMonkey();
            }

            class Program
            {
                static void Main( string[] args )
                {
                    var monkey = new Monkey();
                    Console.WriteLine( "Shock the monkey without the interface: {0}", monkey.Shock );
                    IWombat wombat = monkey as IWombat;
                    Console.WriteLine( "Shock the monkey with the interface: {0}", wombat.ShockTheMonkey() );
                    Console.ReadLine();
                }
            }
        }
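    One conventional fix, sketched with the question's own types: extension methods are compile-time sugar over static methods, so they can never satisfy an interface, but a small adapter can.

        // sketch: wrap the Monkey in a type that really implements IWombat
        class WombatAdapter : IWombat
        {
            private readonly Monkey _monkey;
            public WombatAdapter( Monkey monkey ) { _monkey = monkey; }
            public string ShockTheMonkey() { return _monkey.Shock; }
        }

        // usage: IWombat wombat = new WombatAdapter( monkey );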

    Read the article

  • How to store Ruby method references in a database?

    - by Mad Wombat
    I am writing my first rails app. It needs to aggregate some data from multiple sites, and for each site I have a unique way of getting the data (some provide RSS, some JSON, for some I scrape the HTML, etc.). These will run on a schedule, probably as a rake task from cron. It seems logical to store the sites and relevant information in a model, but I am not sure where to put the unique data retrieval methods. Do I store method names in the model? Do I just name the methods the same as the site name and call them that way? Basically, I need a way to read a list of sites and call the appropriate method for each site. What is the Ruby on Rails way to do it?
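    One idiomatic sketch (with a hypothetical schema: a Site model carrying a fetch_method string column) dispatches by name with Object#send:

        # hypothetical sketch - fetch_method holds e.g. "fetch_rss", "fetch_json"
        class Site < ActiveRecord::Base
          def fetch_data
            send(fetch_method)
          end

          private

          def fetch_rss
            # parse the feed for this site...
          end
        end

        # in the rake task: Site.find_each { |site| site.fetch_data }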

    Read the article

  • Problems upgrading MySQL application from Delphi 2006 to 2010

    - by WombaT
    I upgraded my Delphi to the 2010 version and tried to open and run an application written in Delphi 2006. The app uses mysql via dbexpress, with libmysql.dll and a second driver found somewhere on the Internet. I can't run it on 2010; I always get a "missing libmysql.dll library" error. I tried getting a new version of the DLL, but it didn't help, and neither did copying the library into almost all the system directories. I'm out of ideas - how do I connect to the database? :(

    Read the article

  • How to reconnect to slime/swank-clojure session?

    - by Mad Wombat
    It seems that whenever I disconnect from a clojure slime session, I cannot reconnect again. I am using leiningen to start the swank session (with the lein-swank plugin). So, every time I quit emacs (I know I shouldn't) or reboot/log out, I have to restart both slime and swank. Is there a way to reconnect to a running slime/clojure-swank session?

    Read the article

  • How do I stop jetty server in clojure?

    - by Mad Wombat
    I am writing a web application using ring and clojure. I am using the jetty adapter for the development server and emacs/SLIME for IDE. While wrap-reload does help, run-jetty blocks my slime session and I would like to be able to start/stop it at will without having to run it in a separate terminal session. Ideally, I would like to define a server agent and functions start-server and stop-server that would start/stop the server inside the agent. Is this possible?
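    A sketch of one common approach, assuming ring.adapter.jetty and an app handler var (run-jetty with :join? false returns the underlying Jetty Server object instead of blocking):

        ;; minimal sketch, assuming (:require [ring.adapter.jetty :refer [run-jetty]])
        (defonce server (atom nil))

        (defn start-server []
          (reset! server (run-jetty #'app {:port 3000 :join? false})))

        (defn stop-server []
          (when @server
            (.stop @server)
            (reset! server nil)))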

    Read the article

  • Working with arrays of lists pattern in java

    - by Mad Wombat
    I am writing a card game in java where I need to spread cards from a deck into several columns until I have a fixed amount of cards left. This is how I do it:

        public class Column extends ArrayList {}

        List deck = Cards.createNewDeck();
        Column[] columns = new Column[10];
        int c = 0;
        while (deck.size() > 50) {
            if (c == 10) {
                c = 0;
            }
            if (columns[c] == null) {
                columns[c] = new Column();
            }
            columns[c].add(Cards.dealTopCard(deck));
            c += 1;
        }

    This somehow seems clunky. Is there a more readable/comprehensive way of doing the same thing?
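    One possible tidier shape, keeping the question's hypothetical Cards/Card API: a generic list of lists plus modulo arithmetic replaces the wrap-around and null checks.

        // sketch: assumes the question's Cards.createNewDeck() and dealTopCard() helpers
        List<List<Card>> columns = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            columns.add(new ArrayList<>());
        }

        int c = 0;
        while (deck.size() > 50) {
            columns.get(c).add(Cards.dealTopCard(deck));
            c = (c + 1) % 10; // wrap around the ten columns
        }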

    Read the article

  • Clojure and NoSQL databases

    - by Mad Wombat
    I am currently trying to pick between different NoSQL databases for my project. The project is being written in clojure and javascript, and I am looking at three candidates for storage. What are the relative strengths and weaknesses of MongoDB, FleetDB and CouchDB? Which one is better supported in Clojure? Which one is better supported under Linux? Did I miss a better product (it has to be free and OSS)?

    Read the article

  • Make emacs autocomplete Ruby methods

    - by Mad Wombat
    Is there a way to make emacs pull autocompletions of ruby methods the way Eclipse and NetBeans do? That is, if I type File. and press CTRL-space in Eclipse, I get a list of File methods. Same with variables. I have installed the autocomplete plugin, ruby-mode, rinari and cedet, but so far it will complete local variable and method names, but not native ones.

    Read the article

  • How to make emacs properly indent if-then-else construct in elisp

    - by Mad Wombat
    When I indent an if-then-else construct in emacs lisp, the else block doesn't indent properly. What I get is:

        (defun swank-clojure-decygwinify (path)
          "Convert path from CYGWIN UNIX style to Windows style"
          (if (swank-clojure-cygwin)
              (replace-regexp-in-string "\n" "" (shell-command-to-string (concat "cygpath -w " path)))
            (path)))

    where the else form is not indented at the same level as the then form. Is there an obvious way to fix this?

    Read the article

  • How to select nth element of particular type in enlive?

    - by Mad Wombat
    I am trying to scrape some data from a page with a table-based layout. To get some of the data, I need to select something like the 3rd table inside the 2nd table inside the 5th table inside the 1st table inside body. I am trying to use enlive, but cannot figure out how to use nth-of-type and the other selector steps. To make matters worse, the page in question has a single top-level table inside the body, but (select data [:body :> :table]) returns 6 results for some reason. What the hell am I doing wrong?

    Read the article

  • How to make emacs stay in the current directory

    - by Mad Wombat
    When I start working on a project in emacs, I use M-x cd to get into the project root directory. But every time I use C-x C-f to open a file in one of the subdirectories (like app/model/Store.rb), emacs changes the current directory to that of the file. Is there a way to make emacs stay at the root?

    Read the article

  • More than one emacs terminal

    - by Mad Wombat
    I am getting more and more used to doing everything from inside emacs, but it seems that eshell, shell and term will only run one instance each. Is there a way to run multiple terminals (preferably term) inside emacs?

    Read the article

  • How to replace a region in emacs with yank buffer contents?

    - by Mad Wombat
    When I use VIM or most modeless editors (Eclipse, NetBeans, etc.), I frequently do the following: if I have similar text blocks and I need to change them all, I will change one, copy it (or use a non-deleting yank), select the next block I need, and paste the changed version over it. If I do the same thing in emacs (select a region and paste with C-y), it doesn't replace the region; it just pastes at the cursor position. What is the way to do this in emacs?
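    One likely candidate, offered as a sketch rather than a definitive answer: delete-selection-mode makes typed or yanked text replace the active region, which is exactly the paste-over behaviour described.

        ;; in ~/.emacs or init.el: make C-y replace the selected region
        (delete-selection-mode 1)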

    Read the article
