Search Results

Search found 15803 results on 633 pages for 'self join'.

Page 594 of 633

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random selection in MySQL. In the process I've faced some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works.

    I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column to the table which is filled with random values, then it sorts the data by that column, and then, e.g., you take the top value, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution:

        SELECT * FROM Table T
        JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x ON T.ID >= x.ID
        LIMIT 1;

    While a common ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this functions in the blog, so I'll just skip this and finally move to the odd thing.

    My table is a common table with a PRIMARY KEY id and other non-indexed stuff like username, age, etc. Here's the thing I am struggling to explain:

        SELECT * FROM table ORDER BY RAND() LIMIT 1;             /* 30-40 seconds */
        SELECT id FROM table ORDER BY RAND() LIMIT 1;            /* 0.25 seconds  */
        SELECT id, username FROM table ORDER BY RAND() LIMIT 1;  /* 90 seconds    */

    I was sort of expecting to see approximately the same time for all three queries since I am always sorting on a single column, but for some reason this didn't happen. Please let me know if you have any ideas about this.

    I have a project where I need to do fast ORDER BY RAND(), and personally I would prefer to use

        SELECT id FROM table ORDER BY RAND() LIMIT 1;
        SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1;

    which, yes, is slower than Jay's method, however it is smaller and easier to understand. My queries are rather big ones with several JOINs and a WHERE clause, and while Jay's method still works, the query grows really big and complex because I need to use all the JOINs and the WHERE clause in the joined (called x in his query) subquery. Thanks for your time!
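
    For readers comparing the two approaches, here is a minimal, hypothetical sketch of folding a WHERE clause into Jay's derived-table pattern. The table and column names (users, active) are illustrative, not from the original post:

        -- Pick one random row that also satisfies a filter, without ORDER BY RAND().
        -- The derived table x picks a random qualifying id; the outer query fetches
        -- the first row at or above it, so gaps in the id sequence are tolerated.
        SELECT u.*
        FROM users u
        JOIN (
            SELECT CEIL(MAX(id) * RAND()) AS id
            FROM users
            WHERE active = 1
        ) AS x ON u.id >= x.id
        WHERE u.active = 1
        ORDER BY u.id
        LIMIT 1;

    The known trade-off of this pattern is that rows immediately after gaps in the id sequence are slightly more likely to be chosen.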

    Read the article

  • PHP and MySQL problem?

    - by TaG
    When my code is stored, saved and output, I keep getting these slashes \\\\\ in my output text. How can I get rid of these slashes?

    Here is the PHP code:

        if (isset($_POST['submitted'])) { // Handle the form.
            require_once '../../htmlpurifier/library/HTMLPurifier.auto.php';
            $config = HTMLPurifier_Config::createDefault();
            $config->set('Core.Encoding', 'UTF-8'); // replace with your encoding
            $config->set('HTML.Doctype', 'XHTML 1.0 Strict'); // replace with your doctype
            $purifier = new HTMLPurifier($config);

            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT users.*, profile.* FROM users INNER JOIN contact_info ON contact_info.user_id = users.user_id WHERE users.user_id=3");

            $about_me  = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['about_me']));
            $interests = mysqli_real_escape_string($mysqli, $purifier->purify($_POST['interests']));

            if (mysqli_num_rows($dbc) == 0) {
                $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                $dbc = mysqli_query($mysqli, "INSERT INTO profile (user_id, about_me, interests) VALUES ('$user_id', '$about_me', '$interests')");
            }

            if ($dbc == TRUE) {
                $dbc = mysqli_query($mysqli, "UPDATE profile SET about_me = '$about_me', interests = '$interests' WHERE user_id = '$user_id'");
                echo '<p class="changes-saved">Your changes have been saved!</p>';
            }

            if (!$dbc) {
                // There was an error...do something about it here...
                print mysqli_error($mysqli);
                return;
            }
        }

    Here is the XHTML code:

        <form method="post" action="index.php">
        <fieldset>
        <ul>
        <li><label for="about_me">About Me: </label>
        <textarea rows="8" cols="60" name="about_me" id="about_me"><?php
            if (isset($_POST['about_me'])) { echo mysqli_real_escape_string($mysqli, $_POST['about_me']); }
            else if (!empty($about_me)) { echo mysqli_real_escape_string($mysqli, $about_me); }
        ?></textarea></li>
        <li><label for="my-interests">My Interests: </label>
        <textarea rows="8" cols="60" name="interests" id="interests"><?php
            if (isset($_POST['interests'])) { echo mysqli_real_escape_string($mysqli, $_POST['interests']); }
            else if (!empty($interests)) { echo mysqli_real_escape_string($mysqli, $interests); }
        ?></textarea></li>
        <li><input type="submit" name="submit" value="Save Changes" class="save-button" />
            <input type="hidden" name="submitted" value="true" />
            <input type="submit" name="submit" value="Preview Changes" class="preview-changes-button" /></li>
        </ul>
        </fieldset>
        </form>

    Read the article

  • Rails joins and including columns from the joined table

    - by seth.vargo
    I don't understand how to get the columns I want from Rails. I have two models - a User and a Profile. A User :has_many Profile (because users can revert back to an earlier version of their profile):

        > DESCRIBE users;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | username   | varchar(255) | NO   | UNI | NULL    |                |
        | password   | varchar(255) | NO   |     | NULL    |                |
        | last_login | datetime     | YES  |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+

        > DESCRIBE profiles;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | user_id    | int(11)      | NO   | MUL | NULL    |                |
        | first_name | varchar(255) | NO   |     | NULL    |                |
        | last_name  | varchar(255) | NO   |     | NULL    |                |
        | . . .      | . . .        |      |     |         |                |
        +------------+--------------+------+-----+---------+----------------+

    In SQL, I can run the query:

        > SELECT * FROM profiles JOIN users ON profiles.user_id = users.id LIMIT 1;
        +----+----------+----------+---------------------+---------+------------+-----+
        | id | username | password | last_login          | user_id | first_name | ... |
        +----+----------+----------+---------------------+---------+------------+-----+
        |  1 | john     | ******   | 2010-12-30 18:04:28 |       1 | John       | ... |
        +----+----------+----------+---------------------+---------+------------+-----+

    See how I get all the columns for BOTH tables JOINED together? However, when I run this same query in Rails, I don't get all the columns I want - I only get those from Profile:

        # in rails console
        >> p = Profile.joins(:user).limit(1)
        >> [#<Profile ...>]
        >> p.first_name
        >> NoMethodError: undefined method `first_name' for #<ActiveRecord::Relation:0x102b521d0>
            from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing'
            from (irb):8

        # I do NOT want to do this (AKA I do NOT want to use "includes")
        >> p.user
        >> NoMethodError: undefined method `user' for #<ActiveRecord::Relation:0x102b521d0>
            from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing'
            from (irb):9

    I want to (efficiently) return an object that has all the properties of Profile and User together. I don't want to :include the user because it doesn't make sense. The user should always be part of the most recent profile as if they were fields within the Profile model. How do I accomplish this?
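
    One thing the raw SQL above glosses over is that both tables have an id column, so SELECT * silently lets one overwrite the other in the joined row. A hedged sketch of the same join with explicit aliases (any column names beyond those shown above are illustrative):

        -- Alias the user columns so they can coexist with the profile columns
        -- in a single result row (and in a single object populated from a
        -- select list like this).
        SELECT profiles.*,
               users.id         AS user_record_id,
               users.username   AS username,
               users.last_login AS last_login
        FROM profiles
        JOIN users ON profiles.user_id = users.id
        LIMIT 1;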

    Read the article

  • FluentNHibernate many-to-many mapping - record is not inserted

    - by Dmitriy Nagirnyak
    Hi, I have a many-to-many mapping defined (only relevant fields included):

        // MODEL:
        public class User : IPersistentObject {
            public User() { Permissions = new HashedSet<Permission>(); }
            public virtual int Id { get; protected set; }
            public virtual ISet<Permission> Permissions { get; set; }
        }

        public class Permission : IPersistentObject {
            public Permission() { }
            public virtual int Id { get; set; }
        }

        // MAPPING:
        public class UserMap : ClassMap<User> {
            public UserMap() {
                Id(x => x.Id);
                HasManyToMany(x => x.Permissions).Cascade.All().AsSet();
            }
        }

        public class PermissionMap : ClassMap<Permission> {
            public PermissionMap() {
                Id(x => x.Id).GeneratedBy.Assigned();
                Map(x => x.Description);
            }
        }

    The following test fails, as there is no record inserted into the User_Permissions table:

        [Test]
        public void AddingANewUserPrivilegeShouldSaveIt() {
            var p1 = new Permission { Id = 123, Description = "p1" };
            Session.Save(p1);

            var u = new User { Email = "[email protected]" };
            u.Permissions.Add(p1);
            Session.Save(u);

            var userId = u.Id;
            Session.Evict(u);
            Session.Get<User>(userId).Permissions.Should().Not.Be.Empty();
        }

    The SQL executed is (SQLite):

        INSERT INTO "Permission" (Description, Id) VALUES (@p0, @p1);@p0 = 'p1', @p1 = 1
        INSERT INTO "User" (Email) VALUES (@p0); select last_insert_rowid();@p0 = '[email protected]'
        SELECT user0_.Id as Id2_0_, user0_.Email as Email2_0_ FROM "User" user0_ WHERE user0_.Id=@p0;@p0 = 1
        SELECT permission0_.UserId as UserId1_, permission0_.PermissionId as Permissi2_1_,
               permission1_.Id as Id4_0_, permission1_.Description as Descript2_4_0_
        FROM User_Permissions permission0_
        left outer join "Permission" permission1_ on permission0_.PermissionId=permission1_.Id
        WHERE permission0_.UserId=@p0;@p0 = 1

    We can clearly see that there is no record inserted into the User_Permissions table where it should be. I am not sure what I am doing wrong and need some advice, so can you please help me to pass this test. Thanks, Dmitriy.
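
    For context, a hypothetical sketch of the link-table row the test expects NHibernate to write alongside the two entity INSERTs above (table and column names taken from the generated SQL, ids from the test):

        -- The missing statement the collection save should ultimately produce.
        INSERT INTO User_Permissions (UserId, PermissionId) VALUES (1, 123);

    Seeing which statement never appears makes it easier to tell whether the problem is in the cascade setting or in how the session is flushed.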

    Read the article

  • Can I create a custom class that inherits from a strongly typed DataRow?

    - by Calvin Fisher
    I'm working on a huge, old project with a lot of brittle code, some of which has been around since the .NET 1.0 era, and it has been and will be worked on by other people... so I'd like to change as little as possible.

    I have one project in my solution that contains DataSet.xsd. This project compiles to a separate assembly (Data.dll). The database schema includes several tables arranged more or less hierarchically, but the only way the tables are actually linked together is through joins. I can get, e.g., DepartmentRow and EmployeeRow objects from the autogenerated code. EmployeeRow contains information from the employee's corresponding DepartmentRow through a join.

    I'm making a new report to view multiple departments and all their employees. If I use the existing data access scheme, all I will be able to get is a spreadsheet-like output where each employee is represented on one line, with department information repeated over and over in its appropriate columns. E.g.:

        Department1...Employee1...
        Department1...Employee2...
        Department2...Employee3...

    But what the customer would like is to have each department render like a heading, with a list of employees beneath each. E.g.:

        - Department1...
            Employee1...
            Employee2...
        + Department2...

    I'm trying to do this by inheriting hierarchical objects from the autogenerated Row objects. E.g.:

        public class Department : DataSet.DepartmentRow {
            public List<Employee> Employees;
        }

    That way I could nest the data in the report by using a collection of Department objects as the DataSource, each of which will put its list of Employees in a subreport.

    The problem is that this gives me a "The type Data.DataSet.DepartmentRow has no constructors defined" error. And when I try to make a constructor, e.g.

        public class Department : DataSet.DepartmentRow {
            private Department() { }
            public List<Employee> Employees;
        }

    I get a "'Data.DataSet.DepartmentRow(System.Data.DataRowBuilder)' is inaccessible due to its protection level." error in addition to the first one.

    Is there a way to accomplish what I'm trying to do? Or is there something else I should be trying entirely?

    Read the article

  • How to order a HasMany collection by a child property with Fluent NHibernate mapping

    - by Geoff Hardy
    I am using Fluent NHibernate to map the following classes:

        public abstract class DomainObject {
            public virtual int Id { get; protected internal set; }
        }

        public class Attribute {
            public virtual string Name { get; set; }
        }

        public class AttributeRule {
            public virtual Attribute Attribute { get; set; }
            public virtual Station Station { get; set; }
            public virtual RuleTypeId RuleTypeId { get; set; }
        }

        public class Station : DomainObject {
            public virtual IList<AttributeRule> AttributeRules { get; set; }

            public Station() {
                AttributeRules = new List<AttributeRule>();
            }
        }

    My Fluent NHibernate mappings look like this:

        public class AttributeMap : ClassMap<Attribute> {
            public AttributeMap() {
                Id(o => o.Id);
                Map(o => o.Name);
            }
        }

        public class AttributeRuleMap : ClassMap<AttributeRule> {
            public AttributeRuleMap() {
                Id(o => o.Id);
                Map(o => o.RuleTypeId);
                References(o => o.Attribute).Fetch.Join();
                References(o => o.Station);
            }
        }

        public class StationMap : ClassMap<Station> {
            public StationMap() {
                Id(o => o.Id);
                HasMany(o => o.AttributeRules).Inverse();
            }
        }

    I would like to order the AttributeRules list on Station by the Attribute.Name property, but doing the following does not work:

        HasMany(o => o.AttributeRules).Inverse().OrderBy("Attribute.Name");

    I have not found a way to do this yet in the mappings. I could create an IQuery or ICriteria to do this for me, but ideally I would just like to have the AttributeRules list sorted when I ask for it. Any advice on how to do this mapping?
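
    Part of the difficulty here is that the sort key lives on a joined table: the collection load selects from the rule table, but the name sits on Attribute. A hedged SQL sketch of the load the mapping would have to emit (table and column names are assumed from the class names above, not taken from the post):

        -- Load one station's rules ordered by the joined attribute's name.
        SELECT ar.*
        FROM AttributeRule ar
        JOIN Attribute a ON a.Id = ar.Attribute_id
        WHERE ar.Station_id = 42
        ORDER BY a.Name;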

    Read the article

  • Can anyone explain the source code of Python's "import this"?

    - by byterussian
    If you open a Python interpreter and type "import this", as you know, it prints:

        The Zen of Python, by Tim Peters

        Beautiful is better than ugly.
        Explicit is better than implicit.
        Simple is better than complex.
        Complex is better than complicated.
        Flat is better than nested.
        Sparse is better than dense.
        Readability counts.
        Special cases aren't special enough to break the rules.
        Although practicality beats purity.
        Errors should never pass silently.
        Unless explicitly silenced.
        In the face of ambiguity, refuse the temptation to guess.
        There should be one-- and preferably only one --obvious way to do it.
        Although that way may not be obvious at first unless you're Dutch.
        Now is better than never.
        Although never is often better than *right* now.
        If the implementation is hard to explain, it's a bad idea.
        If the implementation is easy to explain, it may be a good idea.
        Namespaces are one honking great idea -- let's do more of those!

    In the Python source (Lib/this.py) this text is generated by a curious piece of code:

        s = """Gur Mra bs Clguba, ol Gvz Crgref

        Ornhgvshy vf orggre guna htyl.
        Rkcyvpvg vf orggre guna vzcyvpvg.
        Fvzcyr vf orggre guna pbzcyrk.
        Pbzcyrk vf orggre guna pbzcyvpngrq.
        Syng vf orggre guna arfgrq.
        Fcnefr vf orggre guna qrafr.
        Ernqnovyvgl pbhagf.
        Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.
        Nygubhtu cenpgvpnyvgl orngf chevgl.
        Reebef fubhyq arire cnff fvyragyl.
        Hayrff rkcyvpvgyl fvyraprq.
        Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.
        Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.
        Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.
        Abj vf orggre guna arire.
        Nygubhtu arire vf bsgra orggre guna *evtug* abj.
        Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.
        Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.
        Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!"""

        d = {}
        for c in (65, 97):
            for i in range(26):
                d[chr(i+c)] = chr((i+13) % 26 + c)

        print "".join([d.get(c, c) for c in s])

    Read the article

  • Rails / ActiveRecord Modeling Help

    - by JM
    I'm trying to model a relationship in ActiveRecord and I think it's a little beyond my skill level. Here's the background.

    This is a horse racing project and I'm trying to model a horse's Connections over time. Connections are defined as the horse's current Owner, Trainer and Jockey. Over time, a horse's connections can change for a lot of different reasons:

        - The owner sells the horse in a private sale
        - The horse is claimed (purchased in a public sale)
        - The trainer switches jockeys
        - The owner switches trainers

    In my first attempt at modeling this, I created the following tables: Horses, Owners, Trainers, Jockeys and Connections. Essentially, the Connections table was the has-many-through join table and was structured as follows:

        Connections Table 1
        Id, Horse_id, Owner_id, Trainer_id, Jockey_id, Status_Code, Status_Date, Change_Code

    The Horse, Owner, Trainer and Jockey foreign keys are self explanatory. The status code is 1 or 0 (1 active, 0 inactive) and the status date is the date the status changed. Change_Code is an integer or string value that represents the reason for the change (private sale, claim, jockey change, etc).

    The key benefit of this approach is that the Connection is represented as one record in the connections table. The downside is that I have to have a table for Owner (1), Trainer (2) and Jockey (3) when one table could do.

    In my second attempt at modeling this I created the following tables: Horses, Connections, Entities. The Entities table has the following structure:

        Entities Table
        id, First_name, Last_name, Role

    where Role represents whether the entity is an Owner, Trainer or Jockey. Under this approach, my Connections table has the following structure:

        Connections Table 2
        id  horse_id  entity_id  role  status_code  status_date  change_code
        1   1         1          1     1            1/1/2010
        2   1         4          2     1            1/1/2010
        3   1         10         3     1            1/1/2010

    This approach has the benefit of eliminating two tables, but on the other hand the Connection is now comprised of three different records as opposed to one in the first approach.

    What I believe I'm looking for is an approach that allows me to capture the Connection in one record, but also uses an Entities table with roles instead of the Owner, Trainer and Jockey tables. I'm new to ActiveRecord and Rails so any and all input would be greatly appreciated. Perhaps there are other ways that would even be better. Thanks!
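
    For reference, a hedged sketch of the hybrid the poster describes - one Connections row per connection, with three foreign keys into a single Entities table, read back by joining that table three times. Names and types are illustrative, not a recommendation from the post:

        CREATE TABLE entities (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
            first_name VARCHAR(255) NOT NULL,
            last_name  VARCHAR(255) NOT NULL,
            role       TINYINT      NOT NULL,  -- 1 = owner, 2 = trainer, 3 = jockey
            PRIMARY KEY (id)
        );

        CREATE TABLE connections (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
            horse_id    INT UNSIGNED NOT NULL,
            owner_id    INT UNSIGNED NOT NULL,   -- FK -> entities.id
            trainer_id  INT UNSIGNED NOT NULL,   -- FK -> entities.id
            jockey_id   INT UNSIGNED NOT NULL,   -- FK -> entities.id
            status_code TINYINT      NOT NULL,
            status_date DATE         NOT NULL,
            change_code INT          NULL,
            PRIMARY KEY (id)
        );

        -- Reading a connection back joins the entities table three times, once per role.
        SELECT c.horse_id,
               o.last_name AS owner,
               t.last_name AS trainer,
               j.last_name AS jockey
        FROM connections c
        JOIN entities o ON o.id = c.owner_id
        JOIN entities t ON t.id = c.trainer_id
        JOIN entities j ON j.id = c.jockey_id
        WHERE c.status_code = 1;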

    Read the article

  • FluentNHibernate Many-To-One References where Foreign Key is not to Primary Key and column names are different

    - by Todd Langdon
    I've been sitting here for an hour trying to figure this out... I've got 2 tables (abbreviated):

        CREATE TABLE TRUST (
            TRUSTID NUMBER NOT NULL,
            ACCTNBR VARCHAR(25) NOT NULL
        )
        CONSTRAINT TRUST_PK PRIMARY KEY (TRUSTID)

        CREATE TABLE ACCOUNTHISTORY (
            ID NUMBER NOT NULL,
            ACCOUNTNUMBER VARCHAR(25) NOT NULL,
            TRANSAMT NUMBER(38,2) NOT NULL,
            POSTINGDATE DATE NOT NULL
        )
        CONSTRAINT ACCOUNTHISTORY_PK PRIMARY KEY (ID)

    I have 2 classes that essentially mirror these:

        public class Trust {
            public virtual int Id { get; set; }
            public virtual string AccountNumber { get; set; }
        }

        public class AccountHistory {
            public virtual int Id { get; set; }
            public virtual Trust Trust { get; set; }
            public virtual DateTime PostingDate { get; set; }
            public virtual decimal IncomeAmount { get; set; }
        }

    How do I do the many-to-one mapping in FluentNHibernate to get the AccountHistory to have a Trust? Specifically, since it is related on a different column than the Trust primary key of TRUSTID, and the column it is referencing is also named differently (ACCTNBR vs. ACCOUNTNUMBER)?

    Here's what I have so far - how do I do the References on the AccountHistoryMap to Trust?

        public class TrustMap : ClassMap<Trust> {
            public TrustMap() {
                Table("TRUST");
                Id(x => x.Id).Column("TRUSTID");
                Map(x => x.AccountNumber).Column("ACCTNBR");
            }
        }

        public class AccountHistoryMap : ClassMap<AccountHistory> {
            public AccountHistoryMap() {
                Table("TRUSTACCTGHISTORY");
                Id(x => x.Id).Column("ID");
                References<Trust>(x => x.Trust).Column("ACCOUNTNUMBER").ForeignKey("ACCTNBR").Fetch.Join();
                Map(x => x.PostingDate).Column("POSTINGDATE");
            }
        }

    I've tried a few different variations of the above line but can't get anything to work - it pulls back AccountHistory data and a proxy for the Trust; however it says no Trust row with the given identifier. This has to be something simple. Anyone? Thanks in advance.

    Read the article

  • PHP & MySQL deleting multiple rows script problem.

    - by oReiLLy
    I'm trying to delete two table rows from two different tables at once when a user clicks the delete button, but for some reason I can't get the table rows to delete. Can someone help me figure out what is wrong with my script? Thanks.

    Here are the MySQL tables:

        CREATE TABLE cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file VARCHAR(255) NOT NULL,
            case VARCHAR(255) NOT NULL,
            name VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );

        CREATE TABLE users_cases (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            cases_id INT UNSIGNED NOT NULL,
            user_id INT UNSIGNED NOT NULL,
            PRIMARY KEY (id)
        );

    Here is the PHP & MySQL script:

        if (isset($_POST['delete_case'])) {
            $cases_ids = array();
            $mysqli = mysqli_connect("localhost", "root", "", "sitename");
            $dbc = mysqli_query($mysqli, "SELECT cases.*, users_cases.* FROM cases INNER JOIN users_cases ON users_cases.cases_id = cases.id WHERE users_cases.user_id='$user_id'");
            if (!$dbc) {
                print mysqli_error($mysqli);
            } else {
                while ($row = mysqli_fetch_array($dbc)) {
                    $cases_ids[] = $row["cases_id"];
                }
            }
            foreach ($_POST['delete_id'] as $di) {
                if (in_array($di, $cases_ids)) {
                    $mysqli = mysqli_connect("localhost", "root", "", "sitename");
                    $dbc = mysqli_query($mysqli, "DELETE FROM users_cases WHERE cases_id = '$delete_id'");
                    $dbc2 = mysqli_query($mysqli, "DELETE FROM cases WHERE id = '$delete_id'");
                }
            }
        }

    Here is the XHTML:

        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
        <li>
            <input type="text" name="file[]" size="25" />
            <input type="text" name="case[]" size="25" />
            <input type="text" name="name[]" size="25" />
            <input type="hidden" name="delete_id" value="' . $row['cases_id'] . '" />
        </li>
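
    Two SQL-level points are worth checking against the script above, sketched here with the poster's table names (the literal values in the WHERE clause are illustrative). First, case is a reserved word in MySQL, so the CREATE TABLE needs backticks around that column. Second, MySQL can remove the matching rows from both tables in a single multi-table DELETE:

        -- A column named after a reserved word must be backtick-quoted:
        CREATE TABLE cases (
            id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
            file   VARCHAR(255) NOT NULL,
            `case` VARCHAR(255) NOT NULL,
            name   VARCHAR(255) NOT NULL,
            PRIMARY KEY (id)
        );

        -- Delete the case row and its link row in one round trip.
        -- 5 and 7 stand in for the submitted case id and the session user id.
        DELETE c, uc
        FROM cases c
        JOIN users_cases uc ON uc.cases_id = c.id
        WHERE c.id = 5
          AND uc.user_id = 7;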

    Read the article

  • Grouping geographical shapes

    - by grenade
    I am using Dundas Maps and attempting to draw a map of the world where countries are grouped into regions that are specific to a business implementation.

    I have shape data (points and segments) for each country in the world. I can combine countries into regions by adding all points and segments for countries within a region to a new region shape:

        foreach (var region in GetAllRegions()) {
            var regionShape = new Shape { Name = region.Name };
            foreach (var country in GetCountriesInRegion(region.Id)) {
                var countryShape = GetCountryShape(country.Id);
                regionShape.AddSegments(countryShape.ShapeData.Points, countryShape.ShapeData.Segments);
            }
            map.Shapes.Add(regionShape);
        }

    The problem is that the country border lines still show up within a region, and I want to remove them so that only regional borders show up. Dundas polygons must start and end at the same point. This is the case for all the country shapes. Now I need an algorithm that can:

        - Determine where country borders intersect at a regional border, so that I can join the regional border segments.
        - Determine which country borders are not regional borders, so that I can discard them.
        - Sort the resulting regional points so that they sequentially describe the shape boundaries.

    Below is where I have gotten to so far with the map. You can see that the country borders still need to be removed. For example, the border between Mongolia and China should be discarded, whereas the border between Mongolia and Russia should be retained.

    The reason I need to retain a regional border is that the region colors will be significant in conveying information, but adjacent regions may be the same color. The regions can change to include or exclude countries, and this is why the regional shaping must be dynamic.

    EDIT: I now know that what I am looking for is a UNION of polygons. David Lean explains how to do it using the spatial functions in SQL Server 2008, which might be an option, but my efforts have come to a halt because the resulting polygon union is so complex that SQL truncates it at 43,680 characters. I'm now trying to either find a workaround for that or find a way of doing the union in code.
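
    For the SQL Server route mentioned in the edit, a hedged sketch of accumulating a union with the geometry type's STUnion method. Table and column names are illustrative, and this assumes the country outlines are stored in a geometry column:

        -- Build one region outline by unioning its country polygons.
        -- Shared interior borders disappear in the union; only the
        -- outer boundary of the region remains.
        DECLARE @region geometry;

        SELECT @region = CASE
                             WHEN @region IS NULL THEN Outline
                             ELSE @region.STUnion(Outline)
                         END
        FROM CountryShapes
        WHERE RegionId = 1;

        SELECT @region.STAsText() AS RegionOutlineWkt;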

    Read the article

  • itextsharp PdfCopy and landscape pages

    - by Andreas Rehm
    I'm using itextsharp to join multiple pdf documents and add a footer. My code works fine - except for landscape pages: it isn't detecting the page rotation, so the footer is not centered for landscape.

        public static int AddPagesFromStream(Document document, PdfCopy pdfCopy, Stream m, bool addFooter, int detailPages,
                                             string footer, int footerPageNumOffset, int numPages, string pageLangString, string printLangString) {
            CreateFont();
            try {
                m.Seek(0, SeekOrigin.Begin);
                var reader = new PdfReader(m);
                // get page count
                var pdfPages = reader.NumberOfPages;
                var i = 0;
                // add pages
                while (i < pdfPages) {
                    i++;
                    // import page with pdfcopy
                    var page = pdfCopy.GetImportedPage(reader, i);
                    // get page center
                    float posX;
                    float posY;
                    var rotation = page.BoundingBox.Rotation;
                    if (rotation == 0 || rotation == 180) {
                        posX = page.Width / 2;
                        posY = 0;
                    } else {
                        posX = page.Height / 2;
                        posY = 20f;
                    }
                    var ps = pdfCopy.CreatePageStamp(page);
                    var cb = ps.GetOverContent();
                    // add footer
                    cb.SetColorFill(BaseColor.WHITE);
                    var gs1 = new PdfGState { FillOpacity = 0.8f };
                    cb.SetGState(gs1);
                    cb.Rectangle(0, 0, document.PageSize.Width, 46f + posY);
                    cb.Fill();
                    // Text
                    cb.SetColorFill(BaseColor.BLACK);
                    cb.SetFontAndSize(baseFont, 7);
                    cb.BeginText();
                    // create text
                    var pages = string.Format(pageLangString, i + footerPageNumOffset, numPages);
                    cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, printLangString, posX, 40f + posY, 0f);
                    cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, footer, posX, 28f + posY, 0f);
                    cb.ShowTextAligned(PdfContentByte.ALIGN_CENTER, pages, posX, 20f + posY, 0f);
                    cb.EndText();
                    ps.AlterContents();
                    // add page to new pdf
                    pdfCopy.AddPage(page);
                }
                // close PdfReader
                reader.Close();
                // return number of pages
                return i;
            } catch (Exception e) {
                Console.WriteLine(e);
                return 0;
            }
        }

    How do I detect the page rotation (e.g. landscape) format in this case? The given example works for PdfReader but not for PdfCopy.

    Edit: Why do I need PdfCopy? I tried copying a Word pdf export. Some Word hyperlinks will not work when you try to copy pages with PdfReader. Only PdfCopy transfers all needed page information.

    Edit: (SOLVED) You need to use reader.GetPageRotation(i);

    Read the article

  • XML validation in Java - why does this fail?

    - by jd
    Hi, first time dealing with XML, so please be patient. The code below is probably evil in a million ways (I'd be very happy to hear about all of them), but the main problem is of course that it doesn't work :-)

        public class Test {
            private static final String JSDL_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl";
            private static final String JSDL_POSIX_APPLICATION_SCHEMA_URL = "http://schemas.ggf.org/jsdl/2005/11/jsdl-posix";

            public static void main(String[] args) {
                System.out.println(Test.createJSDLDescription("/bin/echo", "hello world"));
            }

            private static String createJSDLDescription(String execName, String args) {
                Document jsdlJobDefinitionDocument = getJSDLJobDefinitionDocument();
                String xmlString = null;
                // create the elements
                Element jobDescription = jsdlJobDefinitionDocument.createElement("JobDescription");
                Element application = jsdlJobDefinitionDocument.createElement("Application");
                Element posixApplication = jsdlJobDefinitionDocument.createElementNS(JSDL_POSIX_APPLICATION_SCHEMA_URL, "POSIXApplication");
                Element executable = jsdlJobDefinitionDocument.createElement("Executable");
                executable.setTextContent(execName);
                Element argument = jsdlJobDefinitionDocument.createElement("Argument");
                argument.setTextContent(args);

                // join them into a tree
                posixApplication.appendChild(executable);
                posixApplication.appendChild(argument);
                application.appendChild(posixApplication);
                jobDescription.appendChild(application);
                jsdlJobDefinitionDocument.getDocumentElement().appendChild(jobDescription);

                DOMSource source = new DOMSource(jsdlJobDefinitionDocument);
                validateXML(source);

                try {
                    Transformer transformer = TransformerFactory.newInstance().newTransformer();
                    transformer.setOutputProperty(OutputKeys.INDENT, "yes");
                    StreamResult result = new StreamResult(new StringWriter());
                    transformer.transform(source, result);
                    xmlString = result.getWriter().toString();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return xmlString;
            }

            private static Document getJSDLJobDefinitionDocument() {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                DocumentBuilder builder = null;
                try {
                    builder = factory.newDocumentBuilder();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                DOMImplementation domImpl = builder.getDOMImplementation();
                Document theDocument = domImpl.createDocument(JSDL_SCHEMA_URL, "JobDefinition", null);
                return theDocument;
            }

            private static void validateXML(DOMSource source) {
                try {
                    URL schemaFile = new URL(JSDL_SCHEMA_URL);
                    SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                    Schema schema = schemaFactory.newSchema(schemaFile);
                    Validator validator = schema.newValidator();
                    DOMResult result = new DOMResult();
                    validator.validate(source, result);
                    System.out.println("is valid");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    It spits out a somewhat odd message:

        org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'JobDescription'.
        One of '{"http://schemas.ggf.org/jsdl/2005/11/jsdl":JobDescription}' is expected.

    Where am I going wrong here? Thanks a lot.

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema.

    My first version of this just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases.

    I'm not quite happy with this solution though; the only positive thing is the simplicity...

        - I can only store a fixed number of columns.
        - I need to create indexes on all columns.
        - I need to recreate the table on schema changes.

    Some of my key design criteria are:

        - Very fast querying (using a simple domain-specific query language)
        - Writes don't have to be fast
        - Many concurrent users
        - Schemas will change often
        - Schemas might contain many thousand columns
        - The data entries might be distributed and need synchronization
        - Preferably MySQL and SQLite - databases like DB2 and Oracle are out of the question
        - Using .NET/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice:

        - Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.
        - Solution 2: Key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to strings.
        - Solution 3: Use an XML database or store values as XML. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good XPath support). I would also like to avoid an XML database, as other parts of the application fit better as a relational model, and being able to join the data is helpful.

    I cannot help thinking that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either...

    I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course... I don't need 99% of the provided functionality, but there is a lot of stuff not included.

    I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask such a question. Thanks in advance!
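
    For illustration, a minimal sketch of the "union-like" Solution 1 the poster describes - one typed row per value, with one nullable column per primitive type. Table and column names are assumptions, not from the post:

        CREATE TABLE entry_values (
            entry_id    INT UNSIGNED NOT NULL,   -- the data entry (row) this value belongs to
            field_id    INT UNSIGNED NOT NULL,   -- the user-defined schema field
            value_type  TINYINT      NOT NULL,   -- 1 = int, 2 = decimal, 3 = text, 4 = datetime
            int_value   BIGINT           NULL,
            dec_value   DECIMAL(18,6)    NULL,
            text_value  TEXT             NULL,
            date_value  DATETIME         NULL,
            PRIMARY KEY (entry_id, field_id),
            KEY idx_field_int  (field_id, int_value),
            KEY idx_field_date (field_id, date_value)
        );

        -- A query in the domain-specific language ("age > 30 AND joined < '2010-01-01'")
        -- becomes one self-join of entry_values per referenced field.
        SELECT a.entry_id
        FROM entry_values a
        JOIN entry_values b ON b.entry_id = a.entry_id
        WHERE a.field_id = 1 AND a.int_value  > 30
          AND b.field_id = 2 AND b.date_value < '2010-01-01';

    The usual cost of this layout is exactly the one the criteria list worries about: every extra field in a query adds another self-join.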

    Read the article

  • Hibernate can't load Custom SQL collection

    - by Geln Yang
    Hi, there is a table Item like:

        code, name
        01,   parent1
        02,   parent2
        0101, child11
        0102, child12
        0201, child21
        0202, child22

    I created a Java object and hbm XML to map the table. The Item.parent is an Item whose code is equal to the first two characters of its own code:

        class Item {
            String code;
            String name;
            Item parent;
            List<Item> children;
            .... setter/getter ....
        }

        <hibernate-mapping>
            <class name="Item" table="Item">
                <id name="code" length="4" type="string">
                    <generator class="assigned" />
                </id>
                <property name="name" column="name" length="50" not-null="true" />
                <many-to-one name="parent" class="Item" not-found="ignore">
                    <formula>
                        <![CDATA[
                        (select i.code,r.name from Item i where (case length(code) when 4 then i.code=SUBSTRING(code,1,2) else false end))
                        ]]>
                    </formula>
                </many-to-one>
                <bag name="children"></bag>
            </class>
        </hibernate-mapping>

    I try to use a formula to define the many-to-one relationship, but it doesn't work! Is there something wrong? Or is there another method? Thanks!

    PS: I use a MySQL database.

    Added 2010/05/23: Pascal's answer is right, but the "false" value must be replaced with another expression, like "1=2", because the "false" value would be considered to be a column of the table:

        select i.code from Item i
        where (case length(code) when 4 then i.code=SUBSTRING(code,1,2) else 1=2 end)

    And I have another question, about the children "bag" mapping. There isn't a formula configuration option for "bag", but we can use "loader" to load a sql-query. I configured the "bag" as follows, but it gets a list whose size is 0. What's wrong with it?

        <class>
            ... ...
            <bag name="children">
                <key />
                <one-to-many class="Item"></one-to-many>
                <loader query-ref="getChildren"></loader>
            </bag>
        </class>

        <sql-query name="getChildren">
            <load-collection alias="r" role="Item.children" />
            <![CDATA[(select {r.*} from Item r join Item o where o.code=:code and
                ( case length(o.code) when 2 then (length(r.code)=4 and SUBSTRING(r.code,1,2)=o.code) else 1=2 end ))]]>
        </sql-query>
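
    Independent of the Hibernate mapping, the parent/child relationship itself is a self-join on the code prefix. A sketch against the Item table above, in MySQL syntax:

        -- Each 4-character code's parent is the row whose code equals its
        -- first two characters; 2-character codes have no parent.
        SELECT c.code AS child_code,
               c.name AS child_name,
               p.code AS parent_code,
               p.name AS parent_name
        FROM Item c
        LEFT JOIN Item p
               ON LENGTH(c.code) = 4
              AND p.code = SUBSTRING(c.code, 1, 2);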

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central and most heavily queried and joined tables look like virtual duplicates of each other.

    Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields as I don't think it's particularly important to the situation. In addition, it's worth saying that rather than four repeated fields as in the example above, I'm looking at 86 repeated fields. For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason.

    In addition, the "groups" represented here have a property inheritance relationship. If DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require that the model extends beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell from several tables which describe very similar entities having almost identical fields? If so, what might be a better design of the example above? I'm using the phrase "design smell" to indicate a possible problem. Of course, in any given situation, a particular design might well be the best solution. I'm looking for a more general answer - wondering what might be wrong with this design and what might be the better design were that the case.

    Possibly related, but not primary, questions:

        - Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied though? Are repeating groups allowed in different tables?
        - Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me because any query on the SubSubGroup necessarily needs to join onto the SubGroup and the TopLevelGroup tables to collect inherited facts, which can make even trivial joins requiring facts from the SubSubGroup table rather long-winded.

    There are, of course, political considerations to making a relatively large change like this. For the purpose of this question, I'm happy to ignore that fact in the interests of keeping the answers ring-fenced to the technical problem.
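
    For comparison, a hedged sketch of one common alternative: collapsing the three near-identical tables into a single self-referencing Groups table and resolving the inherited facts with COALESCE over a three-way self-join. Names follow the hypothetical ones above; this is an illustration of the idea, not a recommendation from the post:

        CREATE TABLE Groups (
            GroupId         INT     NOT NULL PRIMARY KEY,
            ParentGroupId   INT     NULL,       -- NULL for top-level groups
            GroupLevel      TINYINT NOT NULL,   -- 1, 2 or 3; the model never goes deeper
            DisplaysXOnBill BIT     NULL,       -- NULL means "inherit from parent"
            DisplaysYOnBill BIT     NULL,
            IsInvoicedForJ  BIT     NULL,
            IsInvoicedForK  BIT     NULL,
            FOREIGN KEY (ParentGroupId) REFERENCES Groups (GroupId)
        );

        -- Resolve a level-3 group's effective settings: its own value if set,
        -- otherwise the parent's, otherwise the grandparent's.
        SELECT g3.GroupId,
               COALESCE(g3.DisplaysXOnBill, g2.DisplaysXOnBill, g1.DisplaysXOnBill) AS DisplaysXOnBill,
               COALESCE(g3.IsInvoicedForJ,  g2.IsInvoicedForJ,  g1.IsInvoicedForJ)  AS IsInvoicedForJ
        FROM Groups g3
        LEFT JOIN Groups g2 ON g2.GroupId = g3.ParentGroupId
        LEFT JOIN Groups g1 ON g1.GroupId = g2.ParentGroupId
        WHERE g3.GroupLevel = 3;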

    Read the article

  • Expression Too Complex In Access 2007

    - by Jazzepi
    When I try to run this query in Access through the ODBC interface into a MySQL database, I get an "Expression too complex in query expression" error.

    The essential thing I'm trying to do is translate abbreviated names of languages into their full English counterparts. I was curious if there was some way to "trick" Access into thinking the expression is smaller with subqueries, or if someone else had a better idea of how to solve this problem. I thought about making a temporary table and doing a join on it, but that's not supported in Access SQL.

    Just as an FYI, the query worked fine until I added the big long IIf chain. I tested the query on a smaller IIf chain for three languages, and that wasn't an issue, so the problem definitely stems from the huge IIf chain (it's 26 deep). Also, I might be able to drop some of the options (like combining the different forms of Chinese or Portuguese). As a test, I was able to get the SQL query to work after paring it down to 14 IIf() statements, but that's a far cry from the 26 languages I'd like to represent.

        SELECT TOP 5 Count( * ) AS [Number of visits by language],
        IIf(login.lang="ar","Arabic",IIf(login.lang="bg","Bulgarian",IIf(login.lang="zh_CN","Chinese (Simplified Han)",
        IIf(login.lang="zh_TW","Chinese (Traditional Han)",IIf(login.lang="cs","Czech",IIf(login.lang="da","Danish",
        IIf(login.lang="de","German",IIf(login.lang="en_US","United States English",IIf(login.lang="en_GB","British English",
        IIf(login.lang="es","Spanish",IIf(login.lang="fr","French",IIf(login.lang="el","Greek",IIf(login.lang="it","Italian",
        IIf(login.lang="ko","Korean",IIf(login.lang="hu","Hungarian",IIf(login.lang="nl","Dutch",IIf(login.lang="pl","Polish",
        IIf(login.lang="pt_PT","European Portuguese",IIf(login.lang="pt_BR","Brazilian Portuguese",IIf(login.lang="ru","Russian",
        IIf(login.lang="sk","Slovak",IIf(login.lang="sl","Slovenian",IIf(login.lang="fi","Finnish",IIf(login.lang="sv","Swedish",
        IIf(login.lang="tr","Turkish","Unknown")))))))))))))))))))))))))) AS [Language]
        FROM login, reservations, reservation_users, schedules
        WHERE (reservations.start_date Between DATEDIFF('s','1970-01-01 00:00:00',[Starting Date in the Following Format YYYY/MM/DD])
               And DATEDIFF('s','1970-01-01 00:00:00',[Ending Date in the Following Format YYYY/MM/DD]))
          And reservations.is_blackout=0
          And reservation_users.memberid=login.memberid
          And reservation_users.resid=reservations.resid
          And reservation_users.invited=0
          And reservations.scheduleid=schedules.scheduleid
          And scheduletitle=[Schedule Title]
        GROUP BY login.lang
        ORDER BY Count( * ) DESC;

    @ Michael Todd: I completely agree. The list of languages should have been a table in the database and login.lang should have been a FK into that table. Unfortunately this isn't how the database was written, and it's not really mine to modify. The languages are placed into the login.lang field by the PHP running on top of the database.
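
    For reference, a hedged sketch of the lookup-table approach the poster says the schema should have used. Even if the production login table can't be changed, a small standalone table created or linked on the reporting side could play the same role; all names here are illustrative:

        -- One row per language code; replaces the 26-deep IIf chain.
        CREATE TABLE language_names (
            lang      VARCHAR(5)  NOT NULL PRIMARY KEY,
            full_name VARCHAR(50) NOT NULL
        );

        INSERT INTO language_names (lang, full_name) VALUES
            ('ar',    'Arabic'),
            ('bg',    'Bulgarian'),
            ('en_US', 'United States English'),
            ('pt_BR', 'Brazilian Portuguese');
            -- ...one row for each of the remaining codes

        -- The report then becomes a join instead of a nested expression.
        SELECT ln.full_name AS language, COUNT(*) AS visits
        FROM login l
        LEFT JOIN language_names ln ON ln.lang = l.lang
        GROUP BY ln.full_name
        ORDER BY COUNT(*) DESC;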

    Read the article

  • Dynamically schedule a lead sales agent

    - by Josh
    I have a website that I'm trying to migrate from classic ASP to ASP.NET. It had a lead schedule, where each sales agent would be featured for the current day, or part of the day. The next day a new agent would be scheduled. It was driven off a database table that had a row for each day in it, so to figure out if a sales agent would show on a day, it was easy: just find today's date in the table. The problem was it ran out of rows, and you had to run a script to update the lead days 6 months at a time. Plus, if there was ever any change to the schedule, you had to delete all the rows and re-run the script.

    So I'm trying to code it so that SQL Server figures that out for me, and no script has to be run. I have a table like so:

        CREATE TABLE [dbo].[LeadSchedule](
            [leadid] [int] IDENTITY(1,1) NOT NULL,
            [userid] [int] NOT NULL,
            [sunday] [bit] NOT NULL,
            [monday] [bit] NOT NULL,
            [tuesday] [bit] NOT NULL,
            [wednesday] [bit] NOT NULL,
            [thursday] [bit] NOT NULL,
            [friday] [bit] NOT NULL,
            [saturday] [bit] NOT NULL,
            [StartDate] [smalldatetime] NULL,
            [EndDate] [smalldatetime] NULL,
            [StartTime] [time](0) NULL,
            [EndTime] [time](0) NULL,
            [order] [int] NULL,

    So the user can schedule a sales agent depending on their work schedule. Also, if they wanted to, they could split certain days, or sales agents, by time: so from midnight to 4 it was one agent, from 4 to midnight it was another.

    So far I've tried using a numbers table, row numbers, and goofy date math, and I'm at a loss. Any suggestions on how to handle this purely from SQL code? If it helps, the table should always be small - like less than 20 rows, never over 100.

    Update: After a few hours, all I've managed to come up with is the below. It doesn't handle filling in days or times that are not available; it just rotates through all the sales agents:

        with leadTable as
        (
            select leadid, userid, [order], StartDate,
                   case DATEPART(dw, getdate())
                        when 1 then sunday
                        when 2 then monday
                        when 3 then tuesday
                        when 4 then wednesday
                        when 5 then thursday
                        when 6 then friday
                        when 7 then saturday
                   end as DayAvailable,
                   ROW_NUMBER() OVER (ORDER BY [order] ASC) AS ROWID
            from LeadSchedule
            where GETDATE() >= StartDate
              and (CONVERT(time(0), GETDATE()) >= StartTime or StartTime is null)
              and (CONVERT(time(0), GETDATE()) <= EndTime or EndTime is null)
        )
        select userid, DATEADD(d, (number + ROWID - 2) * totalUsers, startdate) leadday
        from (
            select *, (select COUNT(1) from leadTable) totalUsers
            from leadTable
            inner join Numbers on 1=1
            where DayAvailable = 1
        ) tb1
        order by leadday asc
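
    A hedged sketch of the round-robin idea in its simplest form: given a date, pick the agent whose position matches the day offset modulo the number of active agents. It deliberately ignores the per-day and per-time availability columns above, so it is a starting point rather than a drop-in replacement:

        DECLARE @day   date = '2010-06-15';  -- the day being displayed
        DECLARE @epoch date = '2010-01-01';  -- any fixed anchor date before the schedule starts

        WITH agents AS (
            SELECT userid,
                   ROW_NUMBER() OVER (ORDER BY [order]) - 1 AS slot,
                   COUNT(*) OVER ()                         AS total
            FROM LeadSchedule
            WHERE StartDate <= @day
              AND (EndDate IS NULL OR EndDate >= @day)
        )
        SELECT userid
        FROM agents
        WHERE slot = DATEDIFF(day, @epoch, @day) % total;

    Because the agent is computed from the date on demand, nothing has to be pre-generated, and a schedule change simply changes the result from that day forward.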

    Read the article

  • How do you use technology to memorize a set of terms?

    - by user49767
    There are always a few sets of items that need to be memorized in a short span of time. Here are my cases:

        1. My job requires some set of items to be memorized.
        2. I am a developer who has to learn 150+ tags within the next 3 days.
        3. A FIX developer/support person has to remember a minimum of 125+ tags (and their sets of possible values).
        4. It is better if the team's SQL developer knows all the tables and columns in my database.
        5. When someone joins a new department or job, memorizing a few related items definitely gives some benefit.

    In most cases, I suggest people understand the domain better, and there's nothing wrong with using Google (but remember the correct search word). But recently I came across a junior developer who put a lot of effort into memorizing a set of things (150+ table structures, FIX protocol tags, almost 300+ configuration items from a property file) and was very, very successful in his job and swift in responding to support queries. Needless to say, he is a smart worker too (not a dumb guy). When I try to recall some of the successful employees I have met, they were very good at remembering the entire schema, and they did it in a short span of time.

    I don't argue that memorizing alone gives success, but it greatly helps when the situation demands it. My point is: I am not good at remembering things, but that shouldn't be a lame excuse. Hence I am evaluating ways to use technology to memorize better. I am not very interested in memory techniques (mnemonics, photographic memory, etc.). I have recorded 100+ items and listened to them whenever I found free time, and there were definitely some fruitful results.

    Now I need your suggestions about the ways to exploit technology to memorize. There could be many reasons why people remember a subject (passion, necessity, being the author, creator, or the person responsible). I am not interested in dissecting why people remember; rather, I am much more interested in the ways and techniques (cheat sheets...) used to remember a set of items.

    Note: I appreciate and encourage people who could rephrase my question better.
    Note: I have kept a couple of cheat sheets close to my monitor; honestly, they did not help me :).

    Read the article

  • Where am I going wrong in this?

    - by kumar
    <script type="text/javascript">
        $(document).ready(function() {
            $('#PbtnSubmit').attr('disabled', 'disabled');
            $('#PbtnCancel').attr('disabled', 'disabled');

            $('#PbtnSelectAll').click(function() {
                $('#PricingEditExceptions input[type=checkbox]').attr('checked', 'checked');
                $('#PbtnSubmit').removeAttr('disabled');
                $('#PbtnCancel').removeAttr('disabled');
                $('fieldset').find("input,select,textarea").removeAttr('disabled');
            });

            $('#PbtnCancel').click(function() {
                $('#PricingEditExceptions input[name=PMchk]').attr('checked', false);
                $('#PbtnSubmit').attr('disabled', 'disabled');
                $('#PbtnCancel').attr('disabled', 'disabled');
                $('fieldset').find("input:not(:checkbox),select,textarea").attr('disabled', 'disabled');
                $('#genericfieldset').find("input,select,textarea").removeAttr('disabled');
            });

            $('#PbtnSubmit').click(function() {
                $('#PricingEditExceptions input[name=PMchk]').each(function() {
                    if ($("#PricingEditExceptions input:checkbox:checked").length > 0) {
                        var checked = $('#PricingEditExceptions input[type=checkbox]:checked');
                        var PMstrIDs = checked.map(function() {
                            return $(this).val();
                        }).get().join(',');
                        $('#1_exceptiontypes').attr('value', exceptiontypes)
                        $('#1_PMstrIDs').attr('value', PMstrIDs);
                    } else {
                        alert("Please select atleast one exception");
                        return false;
                    }
                });
            });

            function validate_excpt(formData, jqForm, options) {
                var form = jqForm[0];
            }

            // post-submit callback
            function showResponse(responseText, statusText, xhr, $form) {
                if (responseText[0].substring(0, 16) != "System.Exception") {
                    $('#error-msg-ID span:last').html('<strong>Update successful.</strong>');
                } else {
                    $('#error-msg-ID span:last').html('<strong>Update failed.</strong> ' + responseText[0].substring(0, 48));
                }
                $('#error-msg-ID').removeClass('hide');
            }

            $('#exc-').ajaxForm({
                target: '#error-msg-ID',
                beforeSubmit: validate_excpt,
                success: showResponse,
                dataType: 'json'
            });

            $('.button').button();
        });
    </script>

    This is my button code. At the very first page load I am trying to make PbtnSubmit and PbtnCancel disabled, and when we click PbtnSelectAll I am enabling them back again, but it's not doing that with this code. Is there something I am doing wrong in this? Thanks.

    Read the article

  • delete row from selected gridview and database

    - by user175084
    I am trying to delete a row from the GridView and database... It should be deleted when a delete LinkButton is clicked in the GridView. I am getting the row index as follows:

        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            LinkButton btn = (LinkButton)sender;
            GridViewRow row = (GridViewRow)btn.NamingContainer;
            if (row != null)
            {
                LinkButton LinkButton1 = (LinkButton)sender;
                // Get reference to the row that holds the button
                GridViewRow gvr = (GridViewRow)LinkButton1.NamingContainer;
                // Get row index from the row
                int rowIndex = gvr.RowIndex;
                string str = rowIndex.ToString();
                //string str = GridView1.DataKeys[row.RowIndex].Value.ToString();
                RemoveData(str); //call the delete method
            }
        }

    Now I want to delete it... but I am having problems with this code. I get an error: Must declare the scalar variable "@original_MachineGroupName". Any suggestions?

        private void RemoveData(string item)
        {
            SqlConnection conn = new SqlConnection(@"Data Source=JAGMIT-PC\SQLEXPRESS; Initial Catalog=SumooHAgentDB;Integrated Security=True");
            string sql = "DELETE FROM [MachineGroups] WHERE [MachineGroupID] = @original_MachineGroupID";
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.Parameters.AddWithValue("@original_MachineGroupID", item);
            conn.Open();
            cmd.ExecuteNonQuery();
            conn.Close();
        }

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:SumooHAgentDBConnectionString %>"
            SelectCommand="SELECT MachineGroups.MachineGroupID, MachineGroups.MachineGroupName, MachineGroups.MachineGroupDesc,
                                  MachineGroups.TimeAdded, MachineGroups.CanBeDeleted, COUNT(Machines.MachineName) AS Expr1,
                                  DATENAME(month, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + SPACE(1) +
                                  DATENAME(d, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + ', ' +
                                  DATENAME(year, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) AS Expr2
                           FROM MachineGroups FULL OUTER JOIN Machines ON Machines.MachineGroupID = MachineGroups.MachineGroupID
                           GROUP BY MachineGroups.MachineGroupID, MachineGroups.MachineGroupName, MachineGroups.MachineGroupDesc,
                                    MachineGroups.TimeAdded, MachineGroups.CanBeDeleted"
            DeleteCommand="DELETE FROM [MachineGroups] WHERE [MachineGroupID] =@original_MachineGroupID" >
            <DeleteParameters>
                <asp:Parameter Name="@original_MachineGroupID" Type="Int16" />
                <asp:Parameter Name="@original_MachineGroupName" Type="String" />
                <asp:Parameter Name="@original_MachineGroupDesc" Type="String" />
                <asp:Parameter Name="@original_CanBeDeleted" Type="Boolean" />
                <asp:Parameter Name="@original_TimeAdded" Type="Int64" />
            </DeleteParameters>
        </asp:SqlDataSource>

    I still get an error: Must declare the scalar variable "@original_MachineGroupID".

    Read the article

  • jQuery: resizing element cuts off parent's background

    - by Justine
    Hi, I've been trying to recreate an effect from this tutorial: http://jqueryfordesigners.com/jquery-look-tim-van-damme/

    Unfortunately, I want a background image underneath, and because of the resize going on in JavaScript, it gets resized and cut off as well, like so: http://dev.gentlecode.net/dotme/index-sample.html - you can view source there to check the HTML, but the basic structure looks like this:

        <div class="page">
            <div class="container">
                div.header
                ul.nav
                div.main
            </div>
        </div>

    Here is my jQuery code:

        $('ul.nav').each(function() {
            var $links = $(this).find('a'),
                panelIds = $links.map(function() { return this.hash; }).get().join(","),
                $panels = $(panelIds),
                $panelWrapper = $panels.filter(':first').parent(),
                delay = 500;

            $panels.hide();

            $links.click(function() {
                var $link = $(this),
                    link = (this);

                if ($link.is('.current')) {
                    return;
                }

                $links.removeClass('current');
                $link.addClass('current');

                $panels.animate({ opacity: 0 }, delay);
                $panelWrapper.animate({ height: 0 }, delay, function() {
                    var height = $panels.hide().filter(link.hash).show().css('opacity', 1).outerHeight();
                    $panelWrapper.animate({ height: height }, delay);
                });
            });

            var showtab = window.location.hash ? '[hash=' + window.location.hash + ']' : ':first';
            $links.filter(showtab).click();
        });

    In this example, panelWrapper is a div.main and it gets resized to fit the content of tabs. The background is applied to the div.page but because its child is getting resized, it resizes as well, cutting off the background image. It's hard to explain so please look at the link above to see what I mean.

    I guess what I'm trying to ask is: is there a way to resize an element without resizing its parent? I tried setting the height and min-height of .page to 100% and 101% but that didn't work. I tried making the background image fixed, but nada. It also happens if I add the background to the body or even html. Help?

    Read the article

  • Need help converting Ruby code to php code

    - by newprog
    Yesterday I posted this question. Today I found the code which I need, but written in Ruby. Some parts of the code I have understood (I don't know Ruby), but there is one part that I can't. I think people who know Ruby and PHP can help me understand this code.

        def do_create(image)
          # Clear any old info in case of a re-submit
          FIELDS_TO_CLEAR.each { |field| image.send(field+'=', nil) }
          image.save

          # Compose request
          vm_params = Hash.new

          # Submitting a file in ruby requires opening it and then reading the contents into the post body
          file = File.open(image.filename_in, "rb")

          # Populate the parameters and compute the signature
          # Normally you would do this in a subroutine - for maximum clarity all
          # parameters are explicitly spelled out here.
          vm_params["image"] = file # Contents will be read by the multipart object created below
          vm_params["image_checksum"] = image.image_checksum
          vm_params["start_job"] = 'vectorize'
          vm_params["image_type"] = image.image_type if image.image_type != 'none'
          vm_params["image_complexity"] = image.image_complexity if image.image_complexity != 'none'
          vm_params["image_num_colors"] = image.image_num_colors if image.image_num_colors != ''
          vm_params["image_colors"] = image.image_colors if image.image_colors != ''
          vm_params["expire_at"] = image.expire_at if image.expire_at != ''
          vm_params["licensee_id"] = DEVELOPER_ID

          # in php it's like this $vm_params["sequence_number"] = -rand(100000000); ?????
          vm_params["sequence_number"] = Kernel.rand(1000000000) # Use a negative value to force an error when calling the test server
          vm_params["timestamp"] = Time.new.utc.httpdate

          string_to_sign = CREATE_URL +                  # Start out with the URL being called...
            #vm_params["image"].to_s +                   # ... don't include the file per se - use the checksum instead
            vm_params["image_checksum"].to_s +           # ... then include all regular parameters
            vm_params["start_job"].to_s +
            vm_params["image_type"].to_s +
            vm_params["image_complexity"].to_s +         # (nil.to_s => '', so this is fine for vm_params we don't use)
            vm_params["image_num_colors"].to_s +
            vm_params["image_colors"].to_s +
            vm_params["expire_at"].to_s +
            vm_params["licensee_id"].to_s +              # ... then do all the security parameters
            vm_params["sequence_number"].to_s +
            vm_params["timestamp"].to_s
          vm_params["signature"] = sign(string_to_sign)  # no problem

          # Workaround class for handling multipart posts
          mp = Multipart::MultipartPost.new
          query, headers = mp.prepare_query(vm_params)   # Handles the file parameter in a special way (see /lib/multipart.rb)
          file.close                                     # mp has read the contents, we can close the file now

          response = post_form(URI.parse(CREATE_URL), query, headers)
          logger.info(response.body)
          response_hash = ActiveSupport::JSON.decode(response.body) # Decode the JSON response string

        ## I have understood below
        def sign(string_to_sign)
          #logger.info("String to sign: '#{string_to_sign}'")
          Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign))
        end

        # Within the Multipart module I have this:
        class MultipartPost
          BOUNDARY = 'tarsiers-rule0000'
          HEADER = {"Content-type" => "multipart/form-data, boundary=" + BOUNDARY + " "}

          def prepare_query (params)
            fp = []
            params.each {|k,v|
              if v.respond_to?(:read)
                fp.push(FileParam.new(k, v.path, v.read))
              else
                fp.push(Param.new(k,v))
              end
            }
            query = fp.collect {|p| "--" + BOUNDARY + "\r\n" + p.to_multipart }.join("") + "--" + BOUNDARY + "--"
            return query, HEADER
          end
        end

    Thanks for your help.

    Read the article

  • Google Map key in Android?

    - by Amandeep singh
    I am developing an Android app in which I have to show a map view. I did this once in a previous app, but the key I used in the previous app is not working in this one. It is just showing a pin in the application with a blank screen.

    Do I have to use a different Maps key for each project? If not, kindly help me with how I can use my previous key in this one. Also, I tried generating a new key, but it gave me the same key back.

    Here is the code I used:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.map);
            btn = (Button) findViewById(R.id.mapbtn);
            str1 = getIntent().getStringExtra("LATITUDE");
            str2 = getIntent().getStringExtra("LONGITUDE");
            mapView = (MapView) findViewById(R.id.mapView1);
            //View zoomView = mapView.getZoomControls();
            mapView.setBuiltInZoomControls(true);
            //mapView.setSatellite(true);
            mc = mapView.getController();
            btn.setOnClickListener(this);

            MapOverlay mapOverlay = new MapOverlay();
            List<Overlay> listOfOverlays = mapView.getOverlays();
            listOfOverlays.clear();
            listOfOverlays.add(mapOverlay);

            String coordinates[] = {str1, str2};
            double lat = Double.parseDouble(coordinates[0]);
            double lng = Double.parseDouble(coordinates[1]);
            p = new GeoPoint((int) (lat * 1E6), (int) (lng * 1E6));
            mc.animateTo(p);
            mc.setZoom(17);
            mapView.invalidate();
            //mp.equals(o);
        }

        @Override
        protected boolean isRouteDisplayed() {
            // TODO Auto-generated method stub
            return false;
        }

        class MapOverlay extends com.google.android.maps.Overlay {
            @Override
            public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
                super.draw(canvas, mapView, shadow);
                Paint mPaint = new Paint();
                mPaint.setDither(true);
                mPaint.setColor(Color.RED);
                mPaint.setStyle(Paint.Style.FILL_AND_STROKE);
                mPaint.setStrokeJoin(Paint.Join.ROUND);
                mPaint.setStrokeCap(Paint.Cap.ROUND);
                mPaint.setStrokeWidth(2);

                //---translate the GeoPoint to screen pixels---
                Point screenPts = new Point();
                mapView.getProjection().toPixels(p, screenPts);

                //---add the marker---
                Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.pin);
                canvas.drawBitmap(bmp, screenPts.x, screenPts.y - 50, null);
                return true;
            }

    Thanks....

    Read the article

  • MVC Entity Framework Model not returning correct data

    - by quagland
    Hi, I've run into a strange problem while writing an ASP.NET MVC site. I have a view in my SQL Server database that returns a few date ranges. The view works fine when running the query in SSMS. When the view data is returned by the Entity Framework model, it returns the correct number of rows, but some of the rows are duplicated.

    Here is an example of what I have done. SQL Server code:

        CREATE TABLE [dbo].[A](
            [ID] [int] NOT NULL,
            [PhID] [int] NULL,
            [FromDate] [datetime] NULL,
            [ToDate] [datetime] NULL,
            CONSTRAINT [PK_A] PRIMARY KEY CLUSTERED ([ID] ASC)) ON [PRIMARY]
        go

        CREATE TABLE [dbo].[B](
            [PhID] [int] NOT NULL,
            [FromDate] [datetime] NULL,
            [ToDate] [datetime] NULL,
            CONSTRAINT [PK_B] PRIMARY KEY CLUSTERED ([PhID] ASC)) ON [PRIMARY]
        go

        CREATE VIEW C as
        SELECT A.ID,
               CASE WHEN A.PhID IS NULL THEN A.FromDate ELSE B.FromDate END AS FromDate,
               CASE WHEN A.PhID IS NULL THEN A.ToDate ELSE B.ToDate END AS ToDate
        FROM A LEFT OUTER JOIN B ON A.PhID = B.PhID
        go

        INSERT INTO B (PhID, FromDate, ToDate) VALUES (100, '20100615', '20100715')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, NULL, '20100101', '20100201')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, 100, '20100615', '20100715')
        INSERT INTO B (PhID, FromDate, ToDate) VALUES (101, '20101201', '20101231')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, NULL, '20100801', '20100901')
        INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, 101, '20101201', '20101231')

    So now, if you select all from C, you get 4 separate date ranges.

    In the Entity Framework model (which I call 'Core'), the view C is added. In the MVC controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                CoreEntities db = new CoreEntities();
                var clist = from c in db.C
                            select c;
                return View(clist.ToList());
            }
        }

    In the MVC view:

        @model List<RM.Models.C>
        @{
            foreach (RM.Models.C c in Model)
            {
                @String.Format("{0:dd-MMM-yyyy}", c.FromDate) <span>-</span>
                @String.Format("{0:dd-MMM-yyyy}", c.ToDate) <br />
            }
        }

    When I run all this, it outputs this:

        01-Jan-2010 - 01-Feb-2010
        01-Jan-2010 - 01-Feb-2010
        01-Aug-2010 - 01-Sep-2010
        01-Aug-2010 - 01-Sep-2010

    When it should output this (this is what the view returns):

        01-Jan-2010 - 01-Feb-2010
        15-Jun-2010 - 15-Jul-2010
        01-Aug-2010 - 01-Sep-2010
        01-Dec-2010 - 31-Dec-2010

    Also, I've run the SQL Profiler over it and, according to that, the query being executed is:

        SELECT [Extent1].[ID] AS [ID], [Extent1].[FromDate] AS [FromDate], [Extent1].[ToDate] AS [ToDate]
        FROM (SELECT [C].[ID] AS [ID], [C].[FromDate] AS [FromDate], [C].[ToDate] AS [ToDate]
              FROM [dbo].[C] AS [C]) AS [Extent1]

    which returns the correct data. So it seems that the Entity Framework is doing something to the data in the meantime. To me, everything looks fine! Have I missed something? Cheers, Ben
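
    A likely explanation to check is row identity rather than the query: Entity Framework materializes rows by their entity key, and the view's key was probably inferred as ID, which is duplicated in the sample data, so the second row with the same key comes back as a copy of the first. A hedged sketch of giving the view a genuinely unique key column to map as the entity key instead:

        -- Recreate the view with a surrogate key that is unique per row,
        -- then map RowKey (not ID) as the entity key in the model.
        ALTER VIEW C AS
        SELECT ROW_NUMBER() OVER (ORDER BY A.ID, A.PhID) AS RowKey,
               A.ID,
               CASE WHEN A.PhID IS NULL THEN A.FromDate ELSE B.FromDate END AS FromDate,
               CASE WHEN A.PhID IS NULL THEN A.ToDate ELSE B.ToDate END AS ToDate
        FROM A LEFT OUTER JOIN B ON A.PhID = B.PhID;

    Alternatively, reading the rows without change tracking (for example via MergeOption.NoTracking on the query in that version of EF) may sidestep the identity merge without altering the view.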

    Read the article
