Search Results

Search found 16560 results on 663 pages for 'far'.

  • What is best practice as far as using perl-isms (idiomatic expressions) in Perl?

    - by DVK
    A couple of years back I participated in writing the best-practices/coding-style guide for our (fairly large and often Perl-using) company. It was done by a committee of "senior" Perl developers. Like anything done by consensus, it had parts which everyone disagreed with. Duh.

    The part that rubbed people the wrong way the most was a strong recommendation NOT to use many Perlisms (loosely defined as code idioms not present in, say, C++ or Java), such as "Avoid using '... unless X;' constructs". The main rationale posited for rules like this one was that non-Perl developers would otherwise have a much harder time with the Perl code base. The assumption here, I guess, is that Perl code jockeys are a rarer breed overall, and among new hires to the company, than non-Perlers.

    I was wondering whether SO has any good arguments to support or reject this logic... It is mostly academic curiosity at this point, as the company's Perl coding standard is ossified and will never be revised again, as far as I'm aware.

    P.S. Just to be clear, the question is in the context I noted; the answer for an all-Perl smaller development shop is obviously a resounding "use Perl to its maximum capability".
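    For illustration, a minimal sketch of the construct the rule targets (the variable name is hypothetical):

        # Statement-modifier form the guideline discourages:
        die "config file missing\n" unless -e $config_path;

        # Equivalent form familiar to C++/Java developers:
        if (!-e $config_path) {
            die "config file missing\n";
        }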

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP.

    We thought of making a simple console app that tests/times the same interactions against both a flat file (stored on the network) and SQL Server over the network: large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records.

    What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
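    A minimal sketch of the kind of timing harness described (WriteToFlatFile and WriteToSql are hypothetical placeholders for the real implementations):

        using System;
        using System.Diagnostics;

        static class Benchmark
        {
            // Times `count` inserts against whichever back end `insert` wraps.
            public static void TimeInserts(string label, Action<int> insert, int count)
            {
                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < count; i++)
                    insert(i);
                sw.Stop();
                Console.WriteLine("{0}: {1} inserts in {2} ms",
                                  label, count, sw.ElapsedMilliseconds);
            }
        }

        // Usage:
        // Benchmark.TimeInserts("Flat file", WriteToFlatFile, 100000);
        // Benchmark.TimeInserts("SQL Server", WriteToSql, 100000);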

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to:

    1. Create a test that fails.
    2. Do the simplest thing that could possibly work to pass the test.
    3. Add more variants of the test and repeat.
    4. Refactor when a pattern emerges.

    With this approach you're supposed to cover all the cases (the ones that come to my mind, at least), but I wonder if I'm being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything nor throw anything
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    D'oh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually have to add another file name, and another test, which would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils). But:

    Q: Am I taking the "do the simplest thing..." stuff too literally?
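    For what it's worth, a sketch of the triangulating test that usually forces the hard-coded check out (the second file name is hypothetical):

        @Test
        void testAnotherInvalidFile() {
            // A second invalid file defeats the hard-coded name check,
            // forcing validate() to inspect the actual file contents.
            try {
                processor.validate("also-invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }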

  • Why is IE7 hiding my overflow when, as far as I can tell, all its containing elements have overflow

    - by dougoftheabaci
    If you visit the site in question (haddongrant.com) and go to the Artwork section, click on an image and view its stack in Safari, Chrome or Firefox, you'll notice the images extend up and down the page, eventually disappearing over the edge. This is what you should be seeing. In Internet Explorer 7, however, the overflow gets cut off at some point before it ever gets to the end of the page. The problem is... I can't tell where! I've had a look, and every containing element should show overflow; I don't know why IE7 isn't showing it. Does anyone have any ideas where I might need to add an overflow-y: visible;?
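    For reference, the kind of rule being asked about, sketched against a hypothetical container (the real ancestor doing the clipping would still need to be found in the page):

        /* IE7 can clip children when an ancestor gains "layout"
           (e.g. via zoom, width or position), so the rule may need
           to sit on that ancestor rather than the stack itself. */
        #stack-container {
            overflow-y: visible;
        }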

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi. So I saw this post and read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c

    I still have some questions and want to know how things actually work. I found two tutorials:

    http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
    http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

    The first one uses two ADO.NET 2.0 features, BulkInsert and BulkCopy; the second one uses LINQ to SQL and OpenXML. The second sort of appeals to me, as I am using LINQ to SQL already and prefer it over ADO.NET. However, as one person pointed out in the posts, he's just going around the issue at the cost of performance (nothing wrong with that, in my opinion).

    First I will talk about the two ways in the first tutorial. I am using VS2010 Express, .NET 4.0, MVC 2.0 and SQL Server 2005. Is ADO.NET 2.0 the most current version? Based on the technology I am using, are there updates to what I am going to show that would improve it somehow? Is there anything these tutorials left out that I should know about?

    BulkInsert

    I am using this table for all the examples:

        CREATE TABLE [dbo].[TBL_TEST_TEST]
        (
            ID INT IDENTITY(1,1) PRIMARY KEY,
            [NAME] [varchar](50)
        )

    SP code:

        USE [Test]
        GO
        /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50))
        AS
        BEGIN
            INSERT INTO TBL_TEST_TEST VALUES (@Name);
        END

    C# code:

        /// <summary>
        /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
        /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try
        /// to insert 500,000 records in one go.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchInsert()
        {
            // Get the DataTable with Rows State as RowState.Added
            DataTable dtInsertRows = GetDataTable();

            SqlConnection connection = new SqlConnection(connectionString);
            SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.UpdatedRowSource = UpdateRowSource.None;

            // Set the Parameter with appropriate Source Column Name
            command.Parameters.Add("@Name", SqlDbType.VarChar, 50,
                                   dtInsertRows.Columns[0].ColumnName);

            SqlDataAdapter adpt = new SqlDataAdapter();
            adpt.InsertCommand = command;
            // Specify the number of records to be Inserted/Updated in one go. Default is 1.
            adpt.UpdateBatchSize = 1000;

            connection.Open();
            int recordsInserted = adpt.Update(dtInsertRows);
            connection.Close();
        }

    So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? I am sending 500,000 records, so I set a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1,000 it works just fine.

        System.Data.SqlClient.SqlException was unhandled
        Message="A transport-level error has occurred when sending the request to the server.
        (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
        Source=".Net SqlClient Data Provider"
        ErrorCode=-2146232060
        Class=20
        LineNumber=0
        Number=233
        Server=""
        State=0
        StackTrace:
             at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
             at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
             at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
             at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
             at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
             at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
             at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
        InnerException:

    Inserting 500,000 records with an insert batch size of 1,000 took 2 minutes and 54 seconds. Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look at what they were). So I find that kind of slow compared to all my other ones (except the LINQ to SQL insert one), and I am not really sure why.

    Next I looked at BulkCopy:

        /// <summary>
        /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchBulkCopy()
        {
            // Get the DataTable
            DataTable dtInsertRows = GetDataTable();

            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";

                // Number of records to be processed in one go
                sbc.BatchSize = 500000;

                // Map the Source Column from DataTable to the Destination Columns
                // in the SQL Server 2005 Person Table
                // sbc.ColumnMappings.Add("ID", "ID");
                sbc.ColumnMappings.Add("NAME", "NAME");

                // Number of records after which client has to be notified about its status
                sbc.NotifyAfter = dtInsertRows.Rows.Count;

                // Event that gets fired when NotifyAfter number of records are processed.
                sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

                // Finally write to server
                sbc.WriteToServer(dtInsertRows);
                sbc.Close();
            }
        }

    This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BulkCopy had no problem with a 500,000 batch size, so again, why make it smaller than the number of records you want to send? I found that with BulkCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it took 8 seconds. So much faster than the BulkInsert one above.

    Now I tried the other tutorial.

        USE [Test]
        GO
        /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST] (@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO TBL_TEST_TEST (NAME)
        SELECT XMLProdTable.NAME
        FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
             WITH (
                 ID   Int,
                 NAME varchar(100)
             ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    C# code:

        /// <summary>
        /// This is using LINQ to SQL to make the table objects.
        /// They are then serialized to an XML document and sent to a stored
        /// procedure that then does a bulk insert (I think with OpenXML).
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertXMLBatch()
        {
            using (TestDataContext db = new TestDataContext())
            {
                TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
                for (int count = 0; count < 500000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    testRecords[count] = testRecord;
                }

                StringBuilder sBuilder = new StringBuilder();
                System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
                XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
                serializer.Serialize(sWriter, testRecords);
                db.insertTestData(sBuilder.ToString());
            }
        }

    So I like this because I get to use objects, even though it is kind of redundant. I don't get how the SP works, though. Like, I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I don't even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what happens if the object has more tables in it. Say I have a ProductName table which has a relationship to a Product table, or something like that. In LINQ to SQL you can get the ProductName object and make changes to the Product table through that same object. So I am not sure how to take that into account; I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds.

    The last way, of course, was just using LINQ to do it all, and it was pretty bad.

        /// <summary>
        /// This is using LINQ to SQL to insert lots of records.
        /// This way is slow as it uses no mass insert.
        /// Only tried to insert 50,000 records as I did not want to sit around
        /// until it did 500,000 records.
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertAll()
        {
            using (TestDataContext db = new TestDataContext())
            {
                db.CommandTimeout = 600;
                for (int count = 0; count < 50000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
                }
                db.SubmitChanges();
            }
        }

    I did only 50,000 records, and that took over a minute. So I have really narrowed it down to the LINQ to SQL bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying that yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes LINQ to SQL nicer, as you've got objects, especially if you're first parsing data from an XML file before you insert it into the database.

    Full C# code (the four insert methods are exactly as shown above, except that here BatchInsert uses adpt.UpdateBatchSize = 500000):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Serialization;
        using System.Data;
        using System.Data.SqlClient;

        namespace TestIQueryable
        {
            class Program
            {
                private static string connectionString = "";

                static void Main(string[] args)
                {
                    BatchInsert();
                    Console.WriteLine("done");
                }

                // LinqInsertAll(), LinqInsertXMLBatch(), BatchBulkCopy() and
                // BatchInsert() as shown above (BatchInsert with UpdateBatchSize = 500000).

                private static DataTable GetDataTable()
                {
                    // You first need a DataTable that has all the insert values in it
                    DataTable dtInsertRows = new DataTable();
                    dtInsertRows.Columns.Add("NAME");

                    for (int i = 0; i < 500000; i++)
                    {
                        DataRow drInsertRow = dtInsertRows.NewRow();
                        string name = "Name : " + i;
                        drInsertRow["NAME"] = name;
                        dtInsertRows.Rows.Add(drInsertRow);
                    }
                    return dtInsertRows;
                }

                static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
                {
                    Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
                }
            }
        }
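    As an aside on the batch-size question above, a sketch of a variant I have seen suggested (an assumption on my part, not from the tutorials): a BatchSize below the total row count lets the server commit incrementally instead of holding one huge unit of work, and UseInternalTransaction makes each batch its own transaction.

        using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString,
                SqlBulkCopyOptions.UseInternalTransaction))
        {
            sbc.DestinationTableName = "TBL_TEST_TEST";
            sbc.BatchSize = 10000;     // each batch commits on its own
            sbc.BulkCopyTimeout = 600; // seconds
            sbc.ColumnMappings.Add("NAME", "NAME");
            sbc.WriteToServer(GetDataTable());
        }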

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. The total data retrieved is 3-4 MB and the resulting page is about 1 MB. I am using a synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage and no caching, but there is an in-memory list of about 100 objects (1 MB of data in total), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS.

    The issue: when I run the app on my dev computer, the maximum commit charge is about 1.5 GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600 MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1 GB RAM), the maximum commit charge is over 3.8 GB. The biggest user is w3wp.exe at 2.7 GB. IIS grinds to a halt and spits out a timed-out error page.

    Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4 GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.
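    For reference, a minimal sketch of the retrieval pattern described, with the response and reader disposed promptly (the URL handling is a placeholder); undisposed responses are a classic way for a scraping page to hold on to memory:

        using System.IO;
        using System.Net;

        // Sketch: synchronous fetch of one external page.
        static string FetchPage(string url)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }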

  • How to create a controllable flip effect (flip as far (and fast) as you drag) of a UIView

    - by allisone
    I have a card (a flashcard, you could say). What I can do: on finger-sweeping over the flashcard, the card gets turned. It happens by detecting touchesBegan and touchesMoved, and then I do stuff like:

        [UIView beginAnimations:@"View Flip" context:nil];
        [UIView setAnimationDuration:0.5];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];

        if (left) {
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                                   forView:self.flashCardView cache:YES];
        } else {
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromRight
                                   forView:self.flashCardView cache:YES];
        }

    What I can't do (but want to): I want to somehow drag the flip. Imagine this: you have a flashcard and you think you know the answer, so you start flipping the card around because you want to see if you are correct, but then... no, stop... you hesitate, turn the card back to the way it was, rethink, get the real answer and then finally flip to see that you were right to hesitate. Right now I only have: once you start flipping, you can't stop it.
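    A sketch of the direction I would investigate (an assumption, not a known answer): drive the rotation angle directly from the drag distance with a Core Animation transform, instead of the fire-and-forget UIView transition. This requires QuartzCore; flipStartX is a hypothetical ivar recorded in touchesBegan.

        // Sketch: rotate the card layer in proportion to the horizontal drag.
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint p = [[touches anyObject] locationInView:self.view];
            CGFloat fraction = (p.x - flipStartX) / self.view.bounds.size.width;
            CATransform3D t = CATransform3DIdentity;
            t.m34 = -1.0 / 500.0; // perspective
            self.flashCardView.layer.transform =
                CATransform3DRotate(t, fraction * M_PI, 0.0f, 1.0f, 0.0f);
        }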

  • Session hijacking prevention...how far will my script get me? additional prevention procedures?

    - by Yusaf Khaliq
    When the user logs in, the current session variables are set:

        $_SESSION['user']['timeout'] = time();
        $_SESSION['user']['ip']      = $_SERVER['REMOTE_ADDR'];
        $_SESSION['user']['agent']   = $_SERVER['HTTP_USER_AGENT'];

    In my common.php page (required on ALL PHP pages) I have used the script below, which resets a 15-minute timer each time the user is active. Furthermore, it checks the IP address and the user agent; if they do not match the values from when the user first logged in (when the session was first set), the session is unset. After up to 15 minutes of inactivity, the session is also unset.

    Is what I have done a good method for preventing session hijacking? Is it secure, and is it enough? If not, what more can be done?

        if (!empty($_SESSION['user'])) {
            if ($_SESSION['user']['timeout'] + 15 * 60 < time()) {
                unset($_SESSION['user']);
            } else {
                $_SESSION['user']['timeout'] = time();
                if ($_SESSION['user']['ip'] != $_SERVER['REMOTE_ADDR']) {
                    unset($_SESSION['user']);
                }
                if ($_SESSION['user']['agent'] != $_SERVER['HTTP_USER_AGENT']) {
                    unset($_SESSION['user']);
                }
            }
        }
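    One commonly suggested additional measure, sketched here for reference (not part of the original script): regenerate the session ID when the user logs in, so an ID fixed or captured before authentication becomes useless.

        // Sketch: call at login, right after the credentials check succeeds.
        session_regenerate_id(true); // true discards the old session data
        $_SESSION['user']['timeout'] = time();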

  • XMLTask 1.16: Does it work with Ant so far?

    - by noelg103
    I'm trying to use XMLTask 1.16, but, unfortunately, I get the error java.lang.UnsupportedClassVersionError: Bad version number in .class file every time. If I switch back to XMLTask 1.15, it works fine. Does anyone know how to make XMLTask 1.16 work with Ant?
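    For context, that error generally means the jar was compiled for a newer JVM than the one Ant is running under, so comparing the two versions is a reasonable first check (a sketch, assuming a Unix-like shell):

        # Which JVM does Ant run under, and which is on the PATH?
        ant -version
        java -version
        echo $JAVA_HOME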

  • Need to exclude results in a MySQL query where two table fields are not of certain values (brain far

    - by DondeEstaMiCulo
    I don't know if I'm just burnt out and can't think, or what... but I can't seem to make this work right. (We're using MySQL 5.1.)

    I have two tables which have some transactional stuff stored in them. There will be many records per user_id in each table. Table1 and Table2 have a one-to-one relationship with each other. I want to pull records from both tables, but I want to exclude records which have certain values in both tables. I don't care if a record has neither of these values, or just one of them, but it should not have both. (Does this make any sense? lol) For example:

        SELECT t1.id, t1.type, t2.name
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.xid = t2.id
        WHERE t1.user_id = 100
          AND (t1.type != 'FOO' AND t2.name != 'BAR')

    So t1.type is type ENUM with about 10 different options, and t2.name is also type ENUM with 2 options. My expected results would look a little like:

        1, FOO, YUM
        2, BOO, BAR
        3, BOO, YUM

    But instead, all I'm getting is:

        3, BOO, YUM

    Because it's filtering out all records which have 'FOO' as the type or 'BAR' as the name. I keep waiting for that D'oh! moment where it hits me and I feel like an idiot for not realizing what I'm doing wrong. But it hasn't come. And I still feel like an idiot, lol. I appreciate any light any of you can shed on this! Many thanks in advance for the help!
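    For reference, a sketch of how "exclude only when both match" is usually written (De Morgan's law applied to the condition above; whether this is the fix the poster settled on is an assumption):

        -- NOT (A AND B): drop a row only when type is 'FOO' AND name is 'BAR'.
        SELECT t1.id, t1.type, t2.name
        FROM table1 t1
        INNER JOIN table2 t2 ON t1.xid = t2.id
        WHERE t1.user_id = 100
          AND NOT (t1.type = 'FOO' AND t2.name = 'BAR')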

  • How to get the measurements of an object from far away?

    - by Luis Armando
    I am working on an app that intends to give an accurate measure of any object (building, desk, chair, people, etc.) using the camera (either a phone's or a laptop's), but I'm unsure as to how to do this without using a lot of resources. Would someone mind giving me some options? I'm looking for a lightweight approach that can be processed quickly by the computer/phone to give back the measurements.

  • How to place a symbol (path) relative to the far end of SVG text?

    - by dugeen
    I'm working on a program which generates SVG maps. Some of the map items have captions which need a symbol after them (like a plane symbol for an airport caption). If I have a text element thus:

        <text x="30" y="30">Pericles</text>

    I can place another bit of text at the next character position by saying:

        <text x="30" y="30">Pericles <tspan>!</tspan></text>

    but I'd like to draw my own symbol at that position with a <path> element. What I'm doing at the moment is having the generating program guess the extent of the text from tables of font metrics etc., but this isn't accurate enough to place the symbol consistently. Is there any way round this, like specifying a <marker> to be used when drawing the text, and using a tspan with an invisible dash in it or something to get the marker placed?
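    If client-side script is an option (an assumption; the maps may well be static files), the rendered length of the caption can be measured after the fact and the symbol positioned from it:

        // Sketch: measure the caption once rendered, then move the symbol.
        // "caption" and "plane-symbol" are hypothetical element ids.
        var caption = document.getElementById("caption");
        var width = caption.getComputedTextLength();
        document.getElementById("plane-symbol").setAttribute(
            "transform", "translate(" + (30 + width + 2) + ", 30)");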

  • How do I make a DIV stay at the top of the screen no matter how far down the page my visitor scrolls?

    - by user1289863
    What I'm saying is, I have some extremely long pages on my website, which can make it annoying if my visitors need to scroll to the top of the page to be able to navigate to another page. I'm not quite sure what I would call this, but any Google search that contains the words 'DIV' and 'float' comes up with completely unrelated results...

    What I'm looking to do is create a DIV that stays at the top of the screen (not to be confused with the page), so that if the user is at the bottom of the page, they can still see the navigation bar floating at the top of the screen. What I can think of is to position the DIV relative to the position of the screen, but I don't know how to code this. I'm happy to use JavaScript (preferably in the form of jQuery), but if you know how to do this using CSS, I would favour your response.

    This might help: I know a little bit of jQuery and JavaScript, and I know a good deal of CSS and HTML. Thanks in advance.
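    Since CSS is preferred, a minimal sketch using position: fixed, which positions an element relative to the screen rather than the page (the #navbar id and the 50px height are assumptions):

        #navbar {
            position: fixed; /* pinned to the viewport, not the document */
            top: 0;
            left: 0;
            width: 100%;
        }

        body {
            margin-top: 50px; /* keep content from hiding under the bar */
        }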

  • Website with over 1 million posts but not much textual content

    - by Far Se
    I've made a website which crawls files from all over the Internet, and I feel like Google will ban me if I send it sitemaps which contain all of these pages (1m+), because they contain only the file name/size/number of downloads and the download link(s). I'm considering this because I made another website like this in the past and Google banned it after one week with the reason "spam", even though it was not (maybe somebody falsely reported me?!). Does someone have an idea about how to keep Google from banning my website? I've seen several other sites like mine and they don't get banned or... anything. And also, should I send the sitemap or wait until Google indexes every page as it finds it? Thanks in advance :)

  • The JRockit Performance Counters

    - by Marcus Hirt
    Every now and then I get a question regarding what the attributes in the PerfCounters dynamic MBean represent. Now, all the MBeans under the oracle.jrockit.management (bea.jrockit.management pre R28) domain are part of what we call JMXMAPI (the JRockit JMX-based Management API), which is unsupported. Therefore there is no official documentation for the API. I did, however, write a bit about JMXMAPI in my recent JRockit book, Oracle JRockit: The Definitive Guide. The information in the table below is from that book:

    java.cls.loadedClasses: The number of classes loaded since the start of the JVM.
    java.cls.unloadedClasses: The number of classes unloaded since the start of the JVM.
    java.property.java.class.path: The class path of the JVM.
    java.property.java.endorsed.dirs: The endorsed dirs. See the Endorsed Standards Override Mechanism.
    java.property.java.ext.dirs: The ext dirs, which are searched for jars that should be automatically put on the classpath. See the Java documentation for java.ext.dirs.
    java.property.java.home: The root of the JDK or JRE installation.
    java.property.java.library.path: The library path used to find user libraries.
    java.property.java.vm.version: The JRockit version.
    java.rt.vmArgs: The list of VM arguments.
    java.threads.daemon: The number of running daemon threads.
    java.threads.live: The total number of running threads.
    java.threads.livePeak: The peak number of threads that have been running since JRockit was started.
    java.threads.nonDaemon: The number of non-daemon threads running.
    java.threads.started: The total number of threads started since the start of JRockit.
    jrockit.gc.latest.heapSize: The current heap size in bytes.
    jrockit.gc.latest.nurserySize: The current nursery size in bytes.
    jrockit.gc.latest.oc.compaction.time: How long, in ticks, the last compaction lasted. Reset to 0 if compaction is skipped.
    jrockit.gc.latest.oc.heapUsedAfter: Used heap at the end of the last OC, in bytes.
    jrockit.gc.latest.oc.heapUsedBefore: Used heap at the start of the last OC, in bytes.
    jrockit.gc.latest.oc.number: The number of OCs that have occurred so far.
    jrockit.gc.latest.oc.sumOfPauses: The paused time for the last OC, in ticks.
    jrockit.gc.latest.oc.time: The time the last OC took, in ticks.
    jrockit.gc.latest.yc.sumOfPauses: The paused time for the last YC, in ticks.
    jrockit.gc.latest.yc.time: The time the last YC took, in ticks.
    jrockit.gc.max.oc.individualPause: The longest OC pause so far, in ticks.
    jrockit.gc.max.yc.individualPause: The longest YC pause so far, in ticks.
    jrockit.gc.total.oc.compaction.externalAborted: Number of aborted external compactions so far.
    jrockit.gc.total.oc.compaction.internalAborted: Number of aborted internal compactions so far.
    jrockit.gc.total.oc.compaction.internalSkipped: Number of skipped internal compactions so far.
    jrockit.gc.total.oc.compaction.time: The total time spent doing compaction so far, in ticks.
    jrockit.gc.total.oc.compaction.externalSkipped: Number of skipped external compactions so far.
    jrockit.gc.total.oc.pauseTime: The sum of all OC pause times so far, in ticks.
    jrockit.gc.total.oc.time: The total time spent doing OC so far, in ticks.
    jrockit.gc.total.pageFaults: The number of page faults that have occurred during GC so far.
    jrockit.gc.total.yc.pauseTime: The sum of all YC pause times, in ticks.
    jrockit.gc.total.yc.promotedObjects: The number of objects that all YCs have promoted.
    jrockit.gc.total.yc.promotedSize: The total number of bytes that all YCs have promoted, in bytes.
    jrockit.gc.total.yc.time: The total time spent doing YC, in ticks.
    oracle.ci.jit.count: The number of methods JIT compiled.
    oracle.ci.jit.timeTotal: The total time spent JIT compiling, in ticks.
    oracle.ci.opt.count: The number of methods optimized.
    oracle.ci.opt.timeTotal: The total time spent optimizing, in ticks.
    oracle.rt.counterFrequency: Used to convert ticks values to seconds.

    Note that many of these counters are excellent choices for attributes to plot in the Management Console. Also note that many values are in ticks; to convert them to seconds, divide by the value in the oracle.rt.counterFrequency counter.
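    As an illustration, a sketch of reading one of these counters over JMX (the service URL and the exact ObjectName are my assumptions based on the domain named above):

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXConnectorFactory;
        import javax.management.remote.JMXServiceURL;

        public class ReadCounter {
            public static void main(String[] args) throws Exception {
                JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:7091/jmxrmi");
                JMXConnector jmxc = JMXConnectorFactory.connect(url);
                try {
                    MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                    ObjectName perf = new ObjectName(
                        "oracle.jrockit.management:type=PerfCounters"); // assumed name
                    Number ticks = (Number) conn.getAttribute(perf,
                        "jrockit.gc.total.oc.time");
                    Number freq = (Number) conn.getAttribute(perf,
                        "oracle.rt.counterFrequency");
                    // Ticks divided by the counter frequency gives seconds.
                    System.out.println("Total OC time: "
                        + ticks.doubleValue() / freq.doubleValue() + " s");
                } finally {
                    jmxc.close();
                }
            }
        }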

  • Having trouble with projection matrix, need help

    - by Mr.UNOwen
    I'm having trouble with what appears to be the projection matrix. Given a wide enough screen, when a cube is at the left- or right-most edge, the left or right wall appears stretched to the point that the front face is 1/10 the width of the side. I do update the screen ratio along with the projection matrix and viewport on screen resize, so am I safe to assume all the trouble is in the matrix class? Also, the cube follows the mouse, but it's only vertically aligned with it, and it ends up ahead of the mouse when going left or right from the center of the screen.

    The perspective function:

        /**
         * setPerspective
         *
         * @param fov: angle in radians
         * @param aspect: screen ratio w/h
         * @param near: near distance
         * @param far: far distance
         **/
        void APCamera::setPerspective(GMFloat_t fov, GMFloat_t aspect, GMFloat_t near, GMFloat_t far)
        {
            GMFloat_t difZ = near - far;
            GMFloat_t *data;

            mProjection->clear(); // set to identity matrix
            data = mProjection->getData();

            GMFloat_t v = 1.0f / tan(fov / 2.0f);

            data[_AP_MAA] = v / aspect;
            data[_AP_MBB] = v;
            data[_AP_MCC] = (far + near) / difZ;
            data[_AP_MCD] = -1.0f;
            data[_AP_MDD] = 0.0f;
            data[_AP_MDC] = 2.0f * far * near / difZ;

            mRatio = aspect;
            mInvProjOutdated = true;
            mIsPerspective = true;
        }

    and...

        #define _AP_MAA 0
        #define _AP_MAB 1
        #define _AP_MAC 2
        #define _AP_MAD 3
        #define _AP_MBA 4
        #define _AP_MBB 5
        #define _AP_MBC 6
        #define _AP_MBD 7
        #define _AP_MCA 8
        #define _AP_MCB 9
        #define _AP_MCC 10
        #define _AP_MCD 11
        #define _AP_MDA 12
        #define _AP_MDB 13
        #define _AP_MDC 14
        #define _AP_MDD 15
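    For what it's worth, a sketch of how I would sanity-check the call site (the function and values here are assumptions, not from the question): edge distortion like this is often an overly wide field of view or an aspect ratio that doesn't match the actual viewport, so it is worth recomputing both on every resize.

        // Sketch: recompute aspect from the real viewport on resize and keep
        // the FOV moderate (45 degrees, converted to the radians the API expects).
        void onResize(int width, int height)
        {
            glViewport(0, 0, width, height);
            GMFloat_t aspect = (GMFloat_t)width / (GMFloat_t)height;
            GMFloat_t fov = 45.0f * 3.14159265f / 180.0f;
            camera->setPerspective(fov, aspect, 0.1f, 1000.0f);
        }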

  • SVN: Checking out a large project over a slow connection

    - by far
    Hello, I am new to SVN. I want to check out a very large project over a slow connection, which takes ages to download. I have zipped versions of the project on both the remote server and my local machine, and they are identical. Is there an easy and quick way to sync my local project with the remote server without a full checkout? Thanks

  • Drupal: How to make a fieldset dependent using CTools

    - by far
    Hello, I am using CTools Dependency to make a fieldset hideable. This is part of my code:

        $form['profile-status'] = array(
          '#type' => 'radios',
          '#title' => '',
          '#options' => array(
            'new' => t('Create a new profile.'),
            'select' => t('Use an existing profile.'),
          ),
        );

        $form['select'] = array(
          '#type' => 'select',
          '#title' => t('Select a profile'),
          '#options' => $options,
          '#process' => array('ctools_dependent_process'),
          '#dependency' => array('radio:profile-status' => array('select')),
        );

        $form['profile-properties'] = array(
          '#type' => 'fieldset',
          '#title' => t('View the profile'),
          '#process' => array('ctools_dependent_process'),
          '#dependency' => array('radio:profile-status' => array('select')),
          '#input' => true,
        );

    In the snippet above there are two dependent elements, a select and a fieldset. Both have #process and #dependency parameters, and both point to the same field for the dependent value. The problem is that elements like select or textfield can be hidden easily, but it does not work for a fieldset. In this support request page, the CTools creator mentioned that '#input' => true is a workaround. As you can see, I added it to the code, but it does not work either. Do you have any suggestions?

  • converting a mouse click to a ray

    - by Will
    I have a perspective projection. When the user clicks on the screen, I want to compute the ray between the near and far planes that projects from the mouse point, so I can do some ray intersection code with my world. I am using my own matrix, vector and ray classes, and they all work as expected.

    However, when I try to convert the ray to world coordinates, my far always ends up as 0,0,0, and so my ray goes from the mouse click to the centre of the object space, rather than through it. (The x and y coordinates of near and far are identical; they differ only in the z coordinates, where they are negatives of each other.)

        GLint vp[4];
        glGetIntegerv(GL_VIEWPORT, vp);

        matrix_t mv, p;
        glGetFloatv(GL_MODELVIEW_MATRIX, mv.f);
        glGetFloatv(GL_PROJECTION_MATRIX, p.f);

        const matrix_t inv = (mv * p).inverse();

        const float unit_x = (2.0f * ((float)(x - vp[0]) / (vp[2] - vp[0]))) - 1.0f,
                    unit_y = 1.0f - (2.0f * ((float)(y - vp[1]) / (vp[3] - vp[1])));

        const vec_t near(vec_t(unit_x, unit_y, -1) * inv);
        const vec_t far(vec_t(unit_x, unit_y, 1) * inv);

        ray = ray_t(near, far - near);

    What have I got wrong? (How do you unproject the mouse point?)
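    For comparison, a sketch of the unprojection with the step the snippet above appears to be missing: multiplying by the inverse matrix yields homogeneous coordinates, which have to be divided by w (gluUnProject does this internally; vec4_t is a hypothetical 4-component type):

        // Sketch: unproject a normalized device coordinate with the w divide.
        vec4_t unproject(float unit_x, float unit_y, float unit_z,
                         const matrix_t &inv)
        {
            vec4_t v = vec4_t(unit_x, unit_y, unit_z, 1.0f) * inv;
            // Without this divide, the result collapses toward the origin.
            return vec4_t(v.x / v.w, v.y / v.w, v.z / v.w, 1.0f);
        }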
