Search Results

Search found 10966 results on 439 pages for 'kevin db'.

Page 376/439

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi all, I want to achieve the following aspects in the software I am building: 1. Customizable entity model 2. Customizable UI 3. Customizable workflow. I have thought about an approach to achieve this, and I want you to review it and make suggestions: Entity objects should be plain objects and will hold just data. Separate the entity model and DB schema by using a framework (like NHibernate?); this will allow easy modification of entity objects. Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow. Business objects should not hold any state, and hence will contain only static methods. The workflow will decide, depending upon the "state" of an entity/entities, which methods on the business object(s) to invoke. The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen. The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL). What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks myself. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)? I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar (not necessarily an ERP system) application developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.
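    A minimal C# sketch of the stateless-business-object idea described above (all type and method names are hypothetical, not taken from any framework): the workflow inspects the entity's state and dispatches to granular static methods that hold no state of their own.

        // Plain entity: data only, no behaviour.
        public class Order
        {
            public int Id { get; set; }
            public string State { get; set; }      // e.g. "New", "Approved", "Rejected"
            public decimal Total { get; set; }
        }

        // Stateless business logic: granular static methods the workflow can invoke.
        public static class OrderLogic
        {
            public static void Approve(Order order) { order.State = "Approved"; }
            public static void Reject(Order order)  { order.State = "Rejected"; }
        }

        // The workflow decides which method to invoke based on the entity's state.
        public static class OrderWorkflow
        {
            public static void Advance(Order order)
            {
                if (order.State == "New" && order.Total < 1000m)
                    OrderLogic.Approve(order);
                else
                    OrderLogic.Reject(order);
            }
        }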

    Read the article

  • CakePHP's list problem

    - by jun
    Hi there. I have these tables in my DB: Group (id, name) with rows (1, abc), (2, def), (3, ghi), and Pages (id, group_id, name) with rows (1, 1, home), (2, 1, about us). Now I wanted to make a select box that groups them by 'group' using: function add() { $this->set('pages', $this->Page->find('list', array('fields' => array('Page.id', 'Page.name', 'Page.group_id')))); } In my add.ctp: echo $form->input('group_id', array('options' => $pages)); The output: <select name="data[Page][id]" id="PageId"> <optgroup label="1"> <option value="1">Home</option> <option value="2">About Us</option> </optgroup> </select> I wanted the optgroup to display the actual group name, not the group id, like: <select name="data[Page][id]" id="PageId"> <optgroup label="abc"> <option value="1">Home</option> <option value="2">About Us</option> </optgroup> </select> I have tried this one: $this->Page->find('list', array('conditions' => 'Group.id = Page.id', 'fields' => array('Page.id', 'Page.name', 'Group.name'))); But 'Group.id' and 'Group.name' are unknown. Thanks!
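    A hedged sketch of how find('list') can return the group name as the third field, assuming a Page belongsTo Group association is defined on the Page model so the groups table gets joined into the query:

        // In PagesController::add() - a sketch, not tested against this exact schema.
        $this->set('pages', $this->Page->find('list', array(
            'fields'    => array('Page.id', 'Page.name', 'Group.name'), // third field becomes the optgroup label
            'recursive' => 0  // join level that still includes belongsTo associations, so Group.name is selectable
        )));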

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project that is developing a new social media app for web and mobile. We are just beginning to define the functionality. Nevertheless, I'm thinking ahead about architecture. So I'm asking: 1 - What's the best platform to develop the core of this application, which will have a REST API interface? 2 - What's the best database that will scale and grow with my application? As far as I've researched, these were the answers I found most interesting. For the database: Cassandra (NoSQL), with amazing scalability, amazing write performance, and good read performance (to be improved in 0.6) - I think I will choose that one - and ZooKeeper for transactions on Cassandra. I think those two technologies are really good for that purpose. What do you think, guys? On the front end that will serve the REST API, I don't have a final candidate. For this one I have questions based on performance vs. scalability vs. fast development/maintenance. Java or .NET, as far as I've researched, bring the best balance of these requirements. Python, Perl and Rails have the best development/maintenance speed, but are weak on all the others. C or C++ I don't even consider, because their development/maintenance speed is poor... So what do you guys think about it?

    Read the article

  • ASP.NET MVC insert doesn't seem to work for me

    - by Pandiya Chendur
    In my controller's call to the repository insert method all the values are passed, but nothing gets inserted into my table. My controller method: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Create([Bind(Exclude = "Id")]FormCollection collection) { try { MaterialsObj materialsObj = new MaterialsObj(); materialsObj.Mat_Name = collection["Mat_Name"]; materialsObj.Mes_Id = Convert.ToInt64(collection["MeasurementType"]); materialsObj.Mes_Name = collection["Mat_Type"]; materialsObj.CreatedDate = System.DateTime.Now; materialsObj.CreatedBy = Convert.ToInt64(1); materialsObj.IsDeleted = Convert.ToInt64(1); consRepository.createMaterials(materialsObj); return RedirectToAction("Index"); } catch { return View(); } } and my repository: public MaterialsObj createMaterials(MaterialsObj materialsObj) { Material mat = new Material(); mat.Mat_Name = materialsObj.Mat_Name; mat.Mat_Type = materialsObj.Mes_Name; mat.MeasurementTypeId = materialsObj.Mes_Id; mat.Created_Date = materialsObj.CreatedDate; mat.Created_By = materialsObj.CreatedBy; mat.Is_Deleted = materialsObj.IsDeleted; db.Materials.InsertOnSubmit(mat); return materialsObj; } What am I missing here? Any suggestions?
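    A hedged sketch of the most likely culprit: with LINQ to SQL, InsertOnSubmit only queues the new row on the DataContext; nothing reaches the table until SubmitChanges() is called. Assuming db is a LINQ to SQL DataContext, the repository method would typically look something like this:

        public MaterialsObj CreateMaterials(MaterialsObj materialsObj)
        {
            Material mat = new Material();
            mat.Mat_Name = materialsObj.Mat_Name;
            mat.Mat_Type = materialsObj.Mes_Name;
            mat.MeasurementTypeId = materialsObj.Mes_Id;
            mat.Created_Date = materialsObj.CreatedDate;
            mat.Created_By = materialsObj.CreatedBy;
            mat.Is_Deleted = materialsObj.IsDeleted;

            db.Materials.InsertOnSubmit(mat); // stage the new row on the DataContext
            db.SubmitChanges();               // flush pending inserts/updates to the database
            return materialsObj;
        }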

    Read the article

  • How to architect Rails site that can be edited while running?

    - by Chris Kimpton
    Hi, I am writing a Rails app that "scrapes/navigates" some other websites and webservices for content. I am using Mechanize and Savon to do the heavylifting. But given the dynamic nature of the web, I'd like to make my calls to these editable by the admin users of the site - rather than requiring me to release a new version of the site. The actual scraping thread happens async to the website, using the daemons gem. My requirements are: Thinking that the scraping/webservice calling code is quite simple, the easiest route is to make the whole class editable by the admins. Keep a history of the scraping code - so that we can fairly easily revert if we introduce a problem. Initially use the code from the file system, but as soon as thats been edited and stored somewhere, to use that code instead. I am thinking my options are: Store the code in the db (with a history table for the old versions) Store the code in a private git repo somewhere and access that for the history/latest versions. I am thinking the git route might be easiest, given its raison d'etre is to track file history... But perhaps there is a gem/plugin that does all this for me, out of the box? Thanks in advance for any tips/advice. ~chris

    Read the article

  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a DB to store the timetable for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows; I'm not sure if I should have null values in the table. Does this look OK?
    route, Num, Day,    t1,   t2,    t3,    t4,    t5,    t6,    t7,    t8,    t9,    t10
    117,   1,   Monday, 9:00, 9:30,  10:50, 12:00, 14:00, 18:00, 19:00, null,  null,  null
    117,   2,   Monday, 9:03, 9:33,  10:53, 12:03, 14:03, 18:03, 19:03, null,  null,  null
    117,   3,   Monday, 9:06, 9:36,  10:56, 12:06, 14:06, 18:06, 19:06, null,  null,  null
    117,   4,   Monday, 9:09, 9:39,  10:59, 12:09, 14:09, 18:09, 19:09, null,  null,  null
    ...
    117,   20,  Monday, 9:39, 10:09, 11:39, 12:39, 14:39, 18:39, 19:39, null,  null,  null
    119,   1,   Monday, 9:00, 9:30,  10:50, 12:00, 14:00, 18:00, 19:00, 20:00, 21:00, 22:00
    119,   2,   Monday, 9:03, 9:33,  10:53, 12:03, 14:03, 18:03, 19:03, 20:03, 21:03, 22:03
    119,   3,   Monday, 9:06, 9:36,  10:56, 12:06, 14:06, 18:06, 19:06, 20:06, 21:06, 22:06
    119,   4,   Monday, 9:09, 9:39,  10:59, 12:09, 14:09, 18:09, 19:09, 20:09, 21:09, 22:09
    ...
    119,   37,  Monday, 9:49, 9:59,  11:59, 12:59, 14:59, 18:59, 19:59, 20:59, 21:59, 22:59
    139,   1,   Sunday, 9:00, 9:30,  20:00, 21:00, 22:00, null,  null,  null,  null,  null
    139,   2,   Sunday, 9:03, 9:33,  20:03, 21:03, 22:03, null,  null,  null,  null,  null
    139,   3,   Sunday, 9:06, 9:36,  20:06, 21:06, 22:06, null,  null,  null,  null,  null
    139,   4,   Sunday, 9:09, 9:39,  20:09, 21:09, 22:09, null,  null,  null,  null,  null
    ...
    139,   20,  Sunday, 9:49, 9:59,  20:59, 21:59, 22:59, null,  null,  null,  null,  null
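    A hedged sketch of a more normalized alternative that avoids the fixed t1-t10 columns and the nulls: one row per scheduled departure rather than one column per time slot (table and column names are just for illustration):

        CREATE TABLE route (
            route_id   INT PRIMARY KEY,           -- e.g. 117, 119, 139
            name       VARCHAR(50)
        );

        CREATE TABLE stop (
            stop_id    INT PRIMARY KEY,
            route_id   INT NOT NULL REFERENCES route(route_id),
            stop_num   INT NOT NULL                -- position of the stop along the route
        );

        CREATE TABLE departure (
            departure_id  INT PRIMARY KEY,
            stop_id       INT NOT NULL REFERENCES stop(stop_id),
            day_type      VARCHAR(10) NOT NULL,    -- 'Weekday', 'Saturday', 'Sunday'
            depart_time   TIME NOT NULL
        );

        -- Routes with fewer services simply have fewer departure rows; no null placeholders needed.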

    Read the article

  • How does one convert from a Java ResultSet to a ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well: <!--- 'conn' is a JDBC connection ---> <cfset stat = conn.createStatement() /> <cfset rs = stat.executeQuery(trim(arguments.sql)) /> <!--- convert this Java resultset to a CF query recordset ---> <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")> <cfset queryTable.init(rs) > <cfset query = queryTable.FirstTable() /> This creates a statement using a JDBC driver, executes a query against it, putting it into a java resultset, and then coldfusion.sql.QueryTable is instantiated, passed the Java resulset object, and then queryTable.FirstTable() is called, which returns an actual coldfusion resultset (for cfloop and the like). The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error: No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found. I've dumped the Railo java object, and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java result set to a Railo query.

    Read the article

  • MySQL can't access root account or reset with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in, however upon trying to create or access a database I get this lovely error: ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'. I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related, when I try to set the root password I get the error mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'. I've tried removing mysql-server via running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit and mysql is indeed running. --Edit-- I wonder if perhaps there is no root user. So I try to start MySQL with --skip-grant-tables and then create the root user, but then I'm given this: ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement. Fun fun fun fun fun.
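    A hedged sketch of the usual way around ERROR 1290: while the server runs under --skip-grant-tables, account-management statements stay disabled until the grant tables are reloaded, so issue FLUSH PRIVILEGES first and then recreate root (password and host here are placeholders):

        FLUSH PRIVILEGES;   -- reload the grant tables so CREATE USER / GRANT work again
        CREATE USER 'root'@'localhost' IDENTIFIED BY 'new-password';
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;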

    Read the article

  • jeditable not updating browser display - leaves "click to edit..." after successful edit

    - by Enoch
    I am using jeditable fairly simply and it all works fine, updates the database, etc. The only problem I have is that after the user types the new value and hits enter, it doesn't update the field in the browser to show the new value - instead it puts "Click to edit..." in place of it. Am I missing something like a return value from my PHP file? The PHP function just takes the args and updates the database - and it works fine. Enoch The jQuery/jeditable code: $('.edit').editable('update.php',{ id: 'field', name: 'val', indicator: 'Saving...', tooltip: 'Click to edit...', select : true, submitdata : { db : "pers", kn : "key", rec : "<?php echo $rec; ?>" } }); The div: <div class="edit" id="svc_ad1"><?php echo $row->svc_ad1; ?></div> I also have a CSS class for .edit: .edit { float:left; width:200px; height:15px; margin-bottom:5px; border-bottom:1px solid #aaaaaa; }
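    A hedged sketch of the detail jeditable relies on: the server-side script must echo the saved value back, because whatever the response contains becomes the new contents of the element (an empty response leaves the placeholder text showing). Assuming update.php receives the configured field/val parameters, save_to_database() below is a hypothetical stand-in for the existing update code:

        <?php
        // update.php - sketch only
        $id  = $_POST['field']; // element id, per the 'id' option above
        $val = $_POST['val'];   // new value typed by the user

        save_to_database($id, $val);   // hypothetical helper: whatever update logic already works

        echo $val;   // jeditable replaces the element's text with this response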

    Read the article

  • EF4 and multiple abstract levels

    - by Cedric
    I need to use inheritance with EF4 and the TPH model created from the DB. I created a new project to test simple classes. Here is my class model: Here is my table in SQL Server 2008: VEHICLE ID : int PK Owner : varchar(50) Consumption : float FirstCirculationDate : date Type : varchar(50) Discriminator : varchar(10) I added a condition in my EDMX on the Discriminator field to differentiate the Scooter, Car, Motorbike and Bike entities. MotorizedVehicle and Vehicle are abstract. But when I compile, this error appears: Error 3032: Problem in mapping fragments starting at lines 78, 85:EntityTypes EF4InheritanceModel.Scooter, EF4InheritanceModel.Motorbike, EF4InheritanceModel.Car, EF4InheritanceModel.Bike are being mapped to the same rows in table Vehicle. Mapping conditions can be used to distinguish the rows that these types are mapped to. Edit: To Ladislav: I tried it and the error changed to this, for all of my entities: Error 3034: Problem in mapping fragments starting at lines 72, 86:An entity is mapped to different rows within the same table. Ensure these two mapping fragments do not map two groups of entities with overlapping keys to two distinct groups of rows. To Henk (with Ladislav's suggestion): Here are all of the mapping details: What's wrong? Thanks

    Read the article

  • Architecture with NHibernate and Repositories

    - by Matthew
    I've been reading up on MVC 2 and the recommended patterns, so far I've come to the conclusion (amongst much hair pulling and total confusion) that: Model - Is just a basic data container Repository - Provides data access Service - Provides business logic and acts as an API to the Controller The Controller talks to the Service, the Service talks to the Repository and Model. So for example, if I wanted to display a blog post page with its comments, I might do: post = PostService.Get(id); comments = PostService.GetComments(post); Or, would I do: post = PostService.Get(id); comments = post.Comments; If so, where is this being set, from the repository? the problem there being its not lazy loaded.. that's not a huge problem but then say I wanted to list 10 posts with the first 2 comments for each, id have to load the posts then loop and load the comments which becomes messy. All of the example's use "InMemory" repository's for testing and say that including db stuff would be out of scope. But this leaves me with many blanks, so for a start can anyone comment on the above?

    Read the article

  • How to avoid loading a LINQ to SQL object twice when editing it on a website.

    - by emzero
    Hi guys, I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and I couldn't find a good alternative to. In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) at first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here's a simplified example from ScottGu's NerdDinner site. // // GET: /Dinners/Edit/5 [Authorize] public ActionResult Edit(int id) { Dinner dinner = dinnerRepository.GetDinner(id); return View(new DinnerFormViewModel(dinner)); } // // POST: /Dinners/Edit/5 [AcceptVerbs(HttpVerbs.Post), Authorize] public ActionResult Edit(int id, FormCollection collection) { Dinner dinner = dinnerRepository.GetDinner(id); UpdateModel(dinner); dinnerRepository.Save(); return RedirectToAction("Details", new { id=dinner.DinnerID }); } As you can see, the dinner object is loaded two times for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects or something like that, I don't like getting it twice when it should be retrieved only one time, modified and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice (in the example above it won't harm, but there could be cases where getting an object or set of objects could be heavy)? If so, what alternative do you think is best to avoid double-loading the object? Thank you so much, greetings!
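    A hedged sketch of one alternative (not from the NerdDinner book itself): LINQ to SQL can attach an object as modified without re-querying it, provided the table has a rowversion/timestamp column or UpdateCheck=Never on its members, so optimistic concurrency does not need the original values. Here db stands for the underlying DataContext; the entity names follow the example above:

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            var dinner = new Dinner { DinnerID = id };   // no SELECT issued
            UpdateModel(dinner);                         // copy the posted values onto it

            db.Dinners.Attach(dinner, true);             // attach as modified
            db.SubmitChanges();                          // single round trip: the UPDATE only

            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }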

    Read the article

  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new rails projects, reinstalled Haml, and reinstalled Rails. Here's exactly the steps I take when making my application (Rails 2.3.2): rails> rails test rails> cd test rails\test> haml --rails . rails\test> ruby script\generate model user email:string password:string rails\test> ruby script\generate controller users index rails\test> rake db:migrate Here's what the UsersController looks like: class UsersController < ApplicationController def index @users = User.all end end My routes: ActionController::Routing::Routes.draw do |map| map.resources :users end I now create views\users\index.html.haml: %table %th(style="text-align: left;") %h1 Users - for user in @users %tr %td= user.email %td= user.password Annnd run the server... I navigate to localhost:3000\users and I get this error message: Template is missing Missing template users/index.erb in view path app/views For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb but this also doesn't work. I can't figure out why suddenly rails will not render haml for me.

    Read the article

  • Daily, Weekly and Monthly Page View Counter

    - by Jens Fahnenbruck
    I'm building a website with user-generated content. On the home page I want to show a list of all created items, and I want to be able to sort them by a view counter. That sounds easy, but I want multiple counters. I want to know which was the most visited item in the last day, last week, last month or overall. My first idea was to create four counter columns in the item's DB table, one for each of daily, weekly, monthly and overall, and then create a cron job that clears the daily counter every 24 hours, the weekly counter every 7 days and so on. But my problem with this is: what happens if I want to know which was the most viewed item of the week just after the weekly counter got cleared? What I need is an efficient way to create a continuous counter, which gets reduced for every page view that is too old and increased for every new page view. Right now I'm thinking of a solution with the Redis server, but I have no solution yet. I'm just looking for a general idea here, but FYI I'm developing this application in Ruby on Rails.
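    A hedged sketch of the Redis idea (redis-rb gem assumed; the key names are made up): count views in one sorted set per day, then build the weekly or monthly ranking as a union of the last 7 or 30 daily sets, so nothing ever has to be cleared and a period never starts from zero.

        require "date"
        require "redis"

        REDIS = Redis.new

        # Called on every page view.
        def record_view(item_id)
          REDIS.zincrby("views:#{Date.today}", 1, item_id)
        end

        # Most-viewed items over the last `days` days (1 for daily, 7 for weekly, 30 for monthly).
        def top_items(days, limit = 10)
          keys = (0...days).map { |i| "views:#{Date.today - i}" }
          REDIS.zunionstore("views:last#{days}", keys)          # scores are summed across the daily sets
          REDIS.zrevrange("views:last#{days}", 0, limit - 1)    # item ids, highest view count first
        end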

    Read the article

  • How to conduct a good programming interview?

    - by luckyluke
    I conduct interviews from time to time to recruit decent people, and I really think I am not doing the job correctly. I work in a company where we have to do a lot of DB programming, .NET programming and Java programming, so we need people who are open-minded and not focused on a particular tech. After all, a language is a notation; you have to understand what is going on under the hood. I ask people about their projects, ask them some coding questions (believe me, a SQL question involving a CROSS JOIN is hard), let them write some code, ask them about OO design, ask them how they update their knowledge and stay up to date, and whether they have FUN when they code (at least sometimes). Hell, I even give them a coding exercise to do at home (3 hours max) to see how they think and code. And yet my hit rate at hiring junior members (those who survive the initial 3 months) is just about 33%. So my question: how do YOU run good interviews, because I think my hit rate is too low? Do you have any best practices (the rate should be at least 60-70%)? P.S. I've noticed that the best programmers are lazy but motivated - just being lazy is not enough :) - and people who write the best code are attentive to details :)

    Read the article

  • NHibernate.MappingException - Troubleshooting Checklist (no persister for)

    - by Berryl
    Here's a starter list: 1) if hbm is hand generated, is it an embedded resource? 2) if using FNH, does it pass a PerssistenceSpecification test? 3) if not using FNH, can you save and then load the persisted class? 4) more? I'm sure many of you have gotten this one at one point or another. But have you ever gotten it when you knew your mapping was set up correctly? I started getting this exception after I started using a new repository design, but only in one scenario! PersistenceSpecification tests pass, as do all repository methods (using SQLite). The scenario that leads to the exception is when legacy projects from a different db are converted to green field system. The legacy system is from a different database and has it's own session factory, which should be irrelevant because the error comes after previously unconverted Projects are retrieved and in memory. As the routine tries to save these unconverted Projects into the new database, the exception is thrown, full stack trace below. Any ideas on how to build up the trouble shooting check list and solves this problem? Cheers, Berryl === the Exception trace ===== failed: NHibernate.MappingException : No persister for: Smack.ConstructionAdmin.Domain.Model.Projects.Project at NHibernate.Impl.SessionFactoryImpl.GetEntityPersister(String entityName) at NHibernate.Impl.SessionImpl.GetEntityPersister(String entityName, Object obj) at NHibernate.Engine.ForeignKeys.IsTransient(String entityName, Object entity, Nullable`1 assumed, ISessionImplementor session) at NHibernate.Event.Default.AbstractSaveEventListener.GetEntityState(Object entity, String entityName, EntityEntry entry, ISessionImplementor source) at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.PerformSaveOrUpdate(SaveOrUpdateEvent event) at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event) at NHibernate.Impl.SessionImpl.FireSaveOrUpdate(SaveOrUpdateEvent event) at NHibernate.Impl.SessionImpl.SaveOrUpdate(Object obj) NHibernate\Repository\FabioNHibRepository.cs(46,0): at Smack.Core.Data.NHibernate.Repository.FabioNHibRepository`1.Add(T item) LegacyConversion\LegacyBatchUpdater.cs(20,0): at Smack.ConstructionAdmin.Data.LegacyConversion.LegacyBatchUpdater.ConvertOpenLegacyProjects(ILegacyProjectDao legacyProjectDao, IProjectRepository greenProjectRepository) Data\Brownfield\ProjectBatchUpdate_SQLiteTests.cs(19,0): at Smack.ConstructionAdmin.Tests.Data.Brownfield.ProjectBatchUpdate_SQLiteTests.Test()

    Read the article

  • Categories of tags

    - by Peter Rowell
    I'm starting a pro bono project that is the web interface to the world's largest collection of lute music, and it's a challenging collection from several points of view. The pieces are largely from 1400 to 1600, but they range from the mid-1200s to the present day. Needless to say, there is tremendous variability in how the pieces are categorized and who they are attributed to. It is obvious that any sort of rigid, DB-enforced hierarchy isn't going to work with this collection, so my thoughts turn to tags. But not all tags are the same. I'll have tags that represent a person/role (composer, translator, entabulator, etc.), tags that represent the instrument(s) the piece is written for, and tags that represent how the piece has been classified by any one of half a dozen different classification systems used over the centuries. We will be using a semi-controlled tag vocabulary to prevent runaway tag proliferation (e.g. del.icio.us), but I want to treat the tags as belonging to different groups. People tags should not be offered when the editor is doing instrument tagging, etc. Has anyone done something like this? I have several ways I can think of to do it, but if there is an existing system that is well done it would save me time implementing/debugging. FWIW: This is a Django system and I'm looking at starting with Django-tagging and then hacking from there, possibly adding a category field or ...
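    A hedged sketch of one way to model grouped tags in Django (these models are assumptions for illustration, not part of django-tagging): each tag belongs to a category such as "person/role", "instrument" or "classification", and the editor is only offered tags from the relevant category.

        from django.db import models

        class TagCategory(models.Model):
            name = models.CharField(max_length=50, unique=True)   # e.g. "composer", "instrument"

        class Tag(models.Model):
            name = models.CharField(max_length=100)
            category = models.ForeignKey(TagCategory, related_name="tags")

            class Meta:
                unique_together = ("name", "category")

        class PieceTag(models.Model):
            piece = models.ForeignKey("Piece", related_name="piece_tags")  # hypothetical Piece model
            tag = models.ForeignKey(Tag)

        # Offering only instrument tags in the instrument-tagging UI:
        # Tag.objects.filter(category__name="instrument")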

    Read the article

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    Hi, I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated windows authentication which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (ie an end user can't read the credit card table). There are advantages in that even if someone compromises the webserver, they won't be able to glean any additional information from the database. Also, the webserver isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions) So, now I'm looking at the new web app which will be on the public internet. One suggestion is to have a Active Directory server and create windows accounts on the AD for each user of the site. These users will then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP already provides the AD membership provider and role provider so this should be fairly simple to implement. There are a number of questions around this - Scalability, reliability, etc... and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons why to do it / not to do it. Any input appreciated Regards Basiclife

    Read the article

  • Mapping self-table one-to-many using non-PK columns

    - by Harel Moshe
    Hey, I have a legacy DB to which a Person object is mapped, having a collection of family members, like this: class Person { ... string Id; /* 9-digit string */ IList<Person> Family; ... } The PERSON table looks like: Id: CHAR(9), PK FamilyId: INT, NOT NULL and several other non-relevant columns. I'm trying to map the Family collection to the PERSON table using the FamilyId column, which is not the PK as mentioned above. So I actually have a one-to-many which is self-table-referential. I'm getting an error saying 'Cast is not valid' when my mapping looks like this: ... <set name="Family" table="Person" lazy="false"> <key column="FamilyId" /> <one-to-many class="Person" /> </set> ... because obviously, the join NHibernate is trying to make is between the PK column, Id, and the 'secondary' column, FamilyId, instead of joining the FamilyId column to itself. Any ideas, please?
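    A hedged sketch of one direction worth trying: NHibernate's <key> element accepts a property-ref attribute, so the collection can join FamilyId to FamilyId instead of to the primary key. This assumes a FamilyId property is added to Person and mapped as an ordinary column:

        <property name="FamilyId" column="FamilyId" />

        <set name="Family" table="Person" lazy="false">
          <key column="FamilyId" property-ref="FamilyId" />
          <one-to-many class="Person" />
        </set>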

    Read the article

  • GD PHP Base64 Picture (png) error

    - by hogofwar
    This is part of my code: $con = mysql_connect("localhost","username","passs"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("database", $con); if(mysql_num_rows(mysql_query("SELECT name FROM xbox_user WHERE name = '$user'"))){ // Code inside if block if userid is already there $result = mysql_query("SELECT name FROM xbox_user WHERE name = '$user'"); while($row = mysql_fetch_array($result)) { if ($row['date'] > $row['date']+100){ $src = imagecreatefrompng($result['XboxInfo']['TileUrl']); $base64= base64_encode(file_get_contents($result['XboxInfo']['TileUrl'])); $date = date("Ymd"); mysql_query("UPDATE xbox_user SET date = '$date' SET avatar = '$base64' WHERE name = '$user'"); }else{ $encode = $row['avatar']; //echo $encode; $rand = rand(1, 1337); file_put_contents('/tmp/'.$rand.'.png', base64_decode($row['avatar'])); //ERROR LINE $src = imagecreatefrompng('/tmp/'.$rand.'.png'); unlink('/tmp/'.$rand.'.png'); } } }else{ $src = imagecreatefrompng($result['XboxInfo']['TileUrl']); $base64= base64_encode(file_get_contents($result['XboxInfo']['TileUrl'])); $date = date("Ymd"); mysql_query("INSERT INTO xbox_user (name, avatar, date) VALUES ('$user', '$base64', '$date')"); } It comes up with multiple errors but I feel this one should be addressed first as the other could just be caused by the first error: Warning: imagecreatefrompng() [function.imagecreatefrompng]: '/tmp/628.png' is not a valid PNG file in /home/nah/public_html/experiment/xbox/draw3.php on line 60 It also does create an entry in my mysql DB
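    Not a full diagnosis, but two spots in the posted code look suspect; a hedged sketch of both fixes. The UPDATE statement uses the SET keyword twice, which is invalid MySQL syntax (column assignments are comma-separated), so the avatar column is likely never refreshed; and the temp-file round trip can be dropped, since GD can build an image directly from the decoded string and its return value shows immediately whether the stored data is a valid image:

        // Comma-separated assignments instead of a second SET keyword.
        mysql_query("UPDATE xbox_user SET date = '$date', avatar = '$base64' WHERE name = '$user'");

        // Decode straight into GD; no /tmp file needed.
        $src = imagecreatefromstring(base64_decode($row['avatar']));
        if ($src === false) {
            die('Stored avatar is not valid image data');
        }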

    Read the article

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, We're using EF4 in a fairly large system and occasionally run into problems due to EF4 being unable to convert certain expressions into SQL. At present, we either need to do some fancy footwork (DB/Code) or just accept the performance hit and allow the query to be executed in-memory. Needless to say neither of these is ideal and the hacks we've sometimes had to use reduce readability / maintainability. What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously there are some things like .Net method calls which will always have to be client-side but some functionality like date comparisons (eg Group by weeks in Linq to Entities ) should be do-able. I've Googled but perhaps I'm using the wrong terminology as all I get is information about the new features of EF4 SQL Generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions / similar in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS2010 Ultimate, .Net 4 (non-client profile) and EF4. The app is in ASP.Net and is running in a 64-Bit environment in case it makes a difference.

    Read the article

  • Debugging Django project problem.

    - by Wasim
    Hi all, I asked this question before but had no replies; maybe I wasn't clear enough. I'm trying to debug a Django project that uses a MySQL database. If I run the admin or use the shell to talk to the database, everything works and I can do everything. I installed MySQLdb for Python 2.6, installed PyDev in my Aptana Studio, and configured debugging with runserver 8001 --noreload. When I start debugging and reach the following code in C:\Python26\Lib\site-packages\django\db\backends\mysql\base.py try: import MySQLdb as Database except ImportError, e: from django.core.exceptions import ImproperlyConfigured raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) I get an import error: django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: DLL load failed: The specified module could not be found. Trying to dig deeper into the import MySQLdb as Database line, it goes to C:\Python26\Lib\site-packages\MySQLdb\__init__.py and fails on the line import _mysql. I can't understand the problem. When running the Django admin everything is OK, but under the debugger it fails to work. Any help please. Thanks in advance.

    Read the article

  • Slowdowns when reading from a URLConnection's InputStream (even with byte[] and buffers)

    - by user342677
    Ok so after spending two days trying to figure out the problem, and reading about dizillion articles, i finally decided to man up and ask to for some advice(my first time here). Now to the issue at hand - I am writing a program which will parse api data from a game, namely battle logs. There will be A LOT of entries in the database(20+ million) and so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0. (see source code if using chrome, it doesnt display the page right). It has 1000 hit entries, followed by a little battle info(lastpage will have <1000 obviously). On average, a page contains 175000 characters, UTF-8 encoding, xml format(v 1.0). Program will run locally on a good PC, memory is virtually unlimited(so that creating byte[250000] is quite ok). The format never changes, which is quite convenient. Now, I started off as usual: //global vars,class declaration skipped public WebObject(String url_string, int connection_timeout, int read_timeout, boolean redirects_allowed, String user_agent) throws java.net.MalformedURLException, java.io.IOException { // Open a URL connection java.net.URL url = new java.net.URL(url_string); java.net.URLConnection uconn = url.openConnection(); if (!(uconn instanceof java.net.HttpURLConnection)) { throw new java.lang.IllegalArgumentException("URL protocol must be HTTP"); } conn = (java.net.HttpURLConnection) uconn; conn.setConnectTimeout(connection_timeout); conn.setReadTimeout(read_timeout); conn.setInstanceFollowRedirects(redirects_allowed); conn.setRequestProperty("User-agent", user_agent); } public void executeConnection() throws IOException { try { is = conn.getInputStream(); //global var l = conn.getContentLength(); //global var } catch (Exception e) { //handling code skipped } } //getContentStream and getLength methods which just return'is' and 'l' are skipped Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long ,and what doesnt. The call to this method takes only 200ms on avg public InputStream getWebPageAsStream(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 " + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)"); wobj.executeConnection(); l = wobj.getContentLength(); // global variable return wobj.getContentStream(); //returns 'is' stream } 200ms is quite expected from a network operation, and i am fine with it. BUT when i parse the inputStream in any way(read it into string/use java XML parser/read it into another ByteArrayStream) the process takes over 1000ms! 
for example, this code takes 1000ms IF i pass the stream i got('is') above from getContentStream() directly to this method: public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(is); doc.getDocumentElement().normalize(); return doc; } this code too, takes around 920ms IF the initial InputStream 'is' is passed in(dont read into the code itself - it just extracts the data i need by directly counting the characters, which can be done thanks to the rigid api feed format): public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException { // Point A BufferedReader br = new BufferedReader(new InputStreamReader(is)); LinkedList ll = new LinkedList(); String str = br.readLine(); while (str != null) { ll.add(str); str = br.readLine(); } if (((String) ll.get(1)).indexOf("error") != -1) { return new parsedBattlePage(null, null, true, -1); } //Point B Iterator it = ll.iterator(); it.next(); it.next(); it.next(); it.next(); String[][] hits_arr = new String[1000][4]; String t_str = (String) it.next(); String tmp = null; int j = 0; for (int i = 0; t_str.indexOf("time") != -1; i++) { hits_arr[i][0] = t_str.substring(12, t_str.length() - 11); tmp = (String) it.next(); hits_arr[i][1] = tmp.substring(14, tmp.length() - 9); tmp = (String) it.next(); hits_arr[i][2] = tmp.substring(15, tmp.length() - 10); tmp = (String) it.next(); hits_arr[i][3] = tmp.substring(18, tmp.length() - 13); it.next(); it.next(); t_str = (String) it.next(); j++; } String[] b_info_arr = new String[9]; int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13}; for (int i = 0; i < space_nums.length; i++) { tmp = (String) it.next(); b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1); } //Point C return new parsedBattlePage(hits_arr, b_info_arr, false, j); } I have tried replacing the default BufferedReader with BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000); This didnt change much. My second try was to replace the code between A and B with: Iterator it = IOUtils.lineIterator(is, "UTF-8"); Same result, except this time A-B was 0ms, and B-C was 1000ms, so then every call to it.next() must have been consuming some significant time.(IOUtils is from apache-commons-io library). And here is the culprit - the time taken to parse the stream to string, be it by an iterator or BufferedReader in ALL cases was about 1000ms, while the rest of the code took 0ms(e.g. irrelevant). This means that parsing the stream to LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. question was - why? Is it just the way java is made...no...thats just stupid, so I did another experiment. In my main method I added after the getWebPageAsStream(): //Point A ba = new byte[l]; // 'l' comes from wobj.getContentLength above bytesRead = is.read(ba); //'is' is our URLConnection original InputStream offset = bytesRead; while (bytesRead != -1) { bytesRead = is.read(ba, offset - 1, l - offset); offset += bytesRead; } //Point B InputStream is2 = new ByteArrayInputStream(ba); //Now just working with 'is2' - the "copied" stream The InputStream-byte[] conversion took again 1000ms - this is the way many ppl suggested to read an InputStream, and stil it is slow. 
And guess what - the 2 parser methods above (convertToXML() and convertBattlePagetoXMLWithoutDOM(), when passed 'is2' instead of 'is' took, in all 4 cases, under 50ms to complete. I read a suggestion that the stream waits for connection to close before unblocking, so i tried using HttpComponentsClient 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. e.g. this code: public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception { String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page; HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(url); HttpParams p = new BasicHttpParams(); HttpConnectionParams.setSocketBufferSize(p, 250000); HttpConnectionParams.setStaleCheckingEnabled(p, false); HttpConnectionParams.setConnectionTimeout(p, 5000); httpget.setParams(p); HttpResponse response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); l = (int) entity.getContentLength(); return entity.getContent(); } took even longer to process(50ms more for just the network) and the stream parsing times remained the same. Obviously it can be instantiated so as to not create HttpClient and properties every time(faster network time), but the stream issue wont be affected by that. So we come to the center problem - why does the initial URLConnection InputStream(or HttpClient InputStream) take so long to process, while any stream of same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I cant see any good reasong why it is processed so slowly compared to when a same stream is just created from a byte[]. Considering I have to parse million of entries and thousands of pages like that, a total processing time of almost 1.5s/page seems WAY WAY too long. Any ideas? P.S. Please ask in any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the perfomance is ok ~ 200ms/1000entries, prb could be optimized with more cache but I didnt look into it much.
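    A hedged sketch of one thing worth measuring separately: the ~1000 ms very likely includes the network transfer itself, because the URLConnection stream only hands out bytes as they arrive from the server, so the first pass over the stream pays for the download no matter which parser consumes it. Draining the stream once into memory isolates download time from parse time, and the resulting ByteArrayInputStream can then be parsed repeatedly at in-memory speed:

        import java.io.*;

        final class StreamUtil {
            // Drain the stream into memory; read() blocks until network data arrives,
            // so timing this call on its own isolates download time from parse time.
            static byte[] readFully(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream(256 * 1024);
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                }
                return out.toByteArray();
            }
        }

        // Usage inside the existing code:
        // byte[] body = StreamUtil.readFully(getWebPageAsStream(battle_id, page)); // network time
        // Document doc = convertToXML(new ByteArrayInputStream(body));             // pure parse time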

    Read the article

  • Can't bind string containing @ char with mysqli_stmt_bind_param

    - by Tirithen
    I have a problem with my database class. I have a method that takes one prepared statement and any number of parameters, binds them to the statement, executes the statement and formats the result into a multidimentional array. Everthing works fine until I try to include an email adress in one of the parameters. The email contains an @ character and that one seems to break everything. When I supply with parameters: $types = "ss" and $parameters = array("[email protected]", "testtest") I get the error: Warning: Parameter 3 to mysqli_stmt_bind_param() expected to be a reference, value given in ...db/Database.class.php on line 63 Here is the method: private function bindAndExecutePreparedStatement(&$statement, $parameters, $types) { if(!empty($parameters)) { call_user_func_array('mysqli_stmt_bind_param', array_merge(array($statement, $types), &$parameters)); /*foreach($parameters as $key => $value) { mysqli_stmt_bind_param($statement, 's', $value); }*/ } $result = array(); $statement->execute() or debugLog("Database error: ".$statement->error); $rows = array(); if($this->stmt_bind_assoc($statement, $row)) { while($statement->fetch()) { $copied_row = array(); foreach($row as $key => $value) { if($value !== null && mb_substr($value, 0, 1, "UTF-8") == NESTED) { // If value has a nested result inside $value = mb_substr($value, 1, mb_strlen($value, "UTF-8") - 1, "UTF-8"); $value = $this->parse_nested_result_value($value); } $copied_row[$ke<y] = $value; } $rows[] = $copied_row; } } // Generate result $result['rows'] = $rows; $result['insert_id'] = $statement->insert_id; $result['affected_rows'] = $statement->affected_rows; $result['error'] = $statement->error; return $result; } I have gotten one suggestion that: the array_merge is casting parameter to string in the merge change it to &$parameters so it remains a reference So I tried that (3rd line of the method), but it did not do any difference. How should I do? Is there a better way to do this without call_user_func_array?
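    A hedged sketch of the usual fix for that warning: mysqli_stmt_bind_param() needs every bound value passed by reference, and array_merge() produces copies, so the references are lost regardless of what the string contains (the @ in the email is most likely incidental). Building the argument array with explicit references avoids the warning:

        if (!empty($parameters)) {
            $args = array($statement, $types);
            foreach ($parameters as $key => $value) {
                $args[] = &$parameters[$key];   // bind_param requires references, not copied values
            }
            call_user_func_array('mysqli_stmt_bind_param', $args);
        }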

    Read the article

  • How to implement IEquatable<T> when mutable fields are part of the equality - Problem with GetHashCode

    - by Shimmy
    Hello! I am using Entity Framework in my application. In the partial class of an entity I implemented the IEquatable(Of Address) interface: Partial Class Address : Implements IEquatable(Of Address) 'Other part generated Public Overloads Function Equals(ByVal other As Address) As Boolean _ Implements System.IEquatable(Of Address).Equals If ReferenceEquals(Me, other) Then Return True Return AddressId = other.AddressId End Function Public Overrides Function Equals(ByVal obj As Object) As Boolean If obj Is Nothing Then Return MyBase.Equals(obj) If TypeOf obj Is Address Then Return Equals(DirectCast(obj, Address)) Else Return False End Function Public Overrides Function GetHashCode() As Integer Return AddressId.GetHashCode End Function End Class Now in my code I use it this way: Sub Main() Using e As New CompleteKitchenEntities Dim job = e.Job.FirstOrDefault Dim address As New Address() job.Addresses.Add(address) Dim contains1 = job.Addresses.Contains(address) 'True e.SaveChanges() Dim contains2 = job.Addresses.Contains(address) 'False 'The problem is that I can't remove it: Dim removed = job.Addresses.Remove(address) 'False End Using End Sub Note (I checked in the debugger visualizer) that the EntityCollection class stores its entities in a HashSet, so it has to do with the GetHashCode function; I want it to depend on the ID so entities are compared by their IDs. The problem is that when I hit save, the ID changes from 0 to its DB value. So the question is: how can I have an equatable object that is also properly hashed? Please help me find what's wrong in the GetHashCode function (by ID) and what I can change to make it work. Thanks a lot.
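    A hedged sketch of one common workaround (not from the question itself): compute the hash once and cache it, so the value the item was stored under in the HashSet does not change when SaveChanges() replaces the temporary ID with the database-generated one. Unsaved entities fall back to the default identity-based hash:

        Private _cachedHashCode As Nullable(Of Integer)

        Public Overrides Function GetHashCode() As Integer
            If Not _cachedHashCode.HasValue Then
                ' Use the ID when it has already been assigned; otherwise keep identity-based hashing.
                _cachedHashCode = If(AddressId = 0, MyBase.GetHashCode(), AddressId.GetHashCode())
            End If
            Return _cachedHashCode.Value
        End Function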

    Read the article
