Search Results

Search found 31891 results on 1276 pages for 'database schema'.

Page 720/1276

  • Why does a change of Session State provider lead to an ASPx page yielding garbage?

    - by Rory Becker
    I have an ASP.NET web app which has worked very well up until now. I was recently asked to explore ways of making it scale better. I found that separating the database from the web app would help. Further, I was told that if I changed my session-provider mechanism to SQLServer, I could duplicate the web stack across several machines, each calling back to the state server, allowing the load to be distributed better. This sounds logical.

    So I created an ASPState database using ASPNet_RegSQL.exe, as detailed in many locations across the web, and changed the web.config on my app from:

        <sessionState mode="InProc" cookieless="false" timeout="20" />

    to:

        <sessionState mode="SQLServer" sqlConnectionString="Server=SomeSQLServer;user=SomeUser;password=SomePassword" cookieless="false" timeout="20" />

    Then I browsed to my app, which presented me with its logon screen, and I duly logged in. Once in, I was presented not with the page I was expecting, but with garbage. I can change the session state back and forth; the problem goes away and comes back depending on which configuration I use. Why is this happening?
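    One difference worth checking (an assumption on my part, not something stated in the post): with any out-of-process session mode such as SQLServer, everything placed into session must be serializable, whereas InProc accepts any object. A minimal sketch of what that requirement looks like; the type and attribute names are illustrative only:

        // Hypothetical session object: out-of-process session state
        // (SQLServer/StateServer) requires the type to be serializable.
        [Serializable]
        public class UserContext
        {
            public int UserId { get; set; }
            public string UserName { get; set; }
        }

        // Usage somewhere after login (illustrative only):
        // Session["UserContext"] = new UserContext { UserId = 42, UserName = "rory" };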

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present my problem related to SQL Server 2005 bidirectional replication.

    What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A, and the changes should replicate to a copy of table A in the second database. When data on the second server is changed, those changes have to be propagated back to the first server.

    I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, establishing 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created, publishing on both databases is enabled, and the distributor and publisher are up. We use some rules to control which records will be replicated, so we need to call our custom stored procedures during replication; the articles are therefore set to use custom update, insert and delete stored procedures.

    So far so good, but... Everything works fine and changes replicate, until updates are made on both tables simultaneously or before the changes have replicated (which takes about 3-6 seconds). Both records then end up with different values:

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable COL = 5
        db2.dbo.TestTable COL = 4

    But we want last-change-wins replication. Is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide the sample replication script I am using. I am looking forward to your ideas, Mirek

    Read the article

  • Is it a JAXB bug?

    - by wd-shuang
    I have a schema whose element definition is as follows:

        <xs:complexType name="OriginalMessageContents1">
          <xs:sequence>
            <xs:any namespace="##any" processContents="skip"/>
            <xs:any namespace="##any" processContents="skip" minOccurs="0"/>
          </xs:sequence>
        </xs:complexType>

    I use an xjb binding file to generate the Java classes; the xjb is as follows:

        <jxb:bindings version="2.0" xmlns:jxb="http://java.sun.com/xml/ns/jaxb" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <jxb:bindings schemaLocation="ibps.706.001.01.xsd" node="/xs:schema">
            <jxb:bindings node="//xs:complexType[@name='OriginalMessageContents1']/xs:sequence">
              <jxb:bindings node=".//xs:any[position()=1]">
                <jxb:property name="anyOne"/>
              </jxb:bindings>
              <jxb:bindings node=".//xs:any[position()=2]">
                <jxb:property name="anyTwo"/>
              </jxb:bindings>
            </jxb:bindings>
          </jxb:bindings>
        </jxb:bindings>

    The generated Java is:

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "OriginalMessageContents1", propOrder = { "anyOne", "anyTwo" })
        public class OriginalMessageContents1 {
            @XmlAnyElement
            protected Element anyOne;
            @XmlAnyElement
            protected Element anyTwo;
            public Element getAnyOne() { return anyOne; }
            public void setAnyOne(Element value) { this.anyOne = value; }
            public Element getAnyTwo() { return anyTwo; }
            public void setAnyTwo(Element value) { this.anyTwo = value; }
        }

    When I unmarshal a document and call getAnyOne, I get the contents of the second element, and getAnyTwo returns null. Is this a bug? Can anybody help me?

    Read the article

  • Proper use of the volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it. I have a class that basically works as a DB cache: it holds a bunch of objects that it has read from a database, serves requests for those objects, and occasionally refreshes from the database (based on a timeout). Here's the skeleton:

        public class Cache {
            private HashMap mappings = ....;
            private long last_update_time;

            private void loadMappingsFromDB() {
                //....
            }

            private void checkLoad() {
                if (System.currentTimeMillis() - last_update_time > TIMEOUT)
                    loadMappingsFromDB();
            }

            public Data get(ID id) {
                checkLoad();
                //.. look it up
            }
        }

    The concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. So initially I thought I could spin up a thread on cache startup and have it sleep and then update the cache in the background. But then I would need to synchronize my class (or the map), and I would just be trading an occasional big pause for making every cache access slower.

    Then I thought: why not use volatile? I could define the map reference as volatile:

        private volatile HashMap mappings = ....;

    and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference:

        public Data get(ID id) {
            HashMap local = mappings;
            //.. look it up using local
        }

    The background thread would then load into a temporary map and swap the references in the class:

        HashMap tmp;
        //load tmp from DB
        mappings = tmp; //swap variables, forcing a write barrier

    Does this approach make sense? And is it actually thread-safe?
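    A minimal sketch of the background-refresh idea described above, assuming a single writer thread. ID, Data, TIMEOUT and loadMappingsFromDB are the names from the question (with loadMappingsFromDB changed to return the freshly built map); everything else is illustrative:

        // Sketch only: one background writer, many readers. The volatile write
        // publishes the fully-built map; readers never see a half-built one,
        // provided the map is not mutated after publication.
        public class Cache {
            private volatile Map<ID, Data> mappings = new HashMap<ID, Data>();

            public Cache() {
                Thread refresher = new Thread(new Runnable() {
                    public void run() {
                        while (!Thread.currentThread().isInterrupted()) {
                            Map<ID, Data> tmp = loadMappingsFromDB(); // build off to the side
                            mappings = tmp;                           // volatile write = safe publication
                            try {
                                Thread.sleep(TIMEOUT);
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                    }
                });
                refresher.setDaemon(true);
                refresher.start();
            }

            public Data get(ID id) {
                return mappings.get(id); // single volatile read; no locking on the read path
            }
        }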

    Read the article

  • Why does Hibernate ignore the JPA2 standardized properties in my persistence.xml?

    - by Ophidian
    I have an extremely simple web application running in Tomcat using Spring 3.0.1, Hibernate 3.5.1, JPA 2, and Derby. I am defining all of my database connectivity in persistence.xml and merely using Spring for dependency injection. I am using embedded Derby as my database.

    Everything works correctly when I define the driver and url properties in persistence.xml in the classic Hibernate manner, like so:

        <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="hibernate.connection.url" value="jdbc:derby:webdb;create=true"/>

    The problems occur when I switch my configuration to the JPA2 standardized properties:

        <property name="javax.persistence.jdbc.driver" value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:derby:webdb;create=true"/>

    When using the JPA2 property keys, the application bails hard with the following exception:

        java.lang.UnsupportedOperationException: The user must supply a JDBC connection

    Does anyone know why this is failing? NOTE: I have copied the javax... property strings straight from the Hibernate reference documentation, so a typo is extremely unlikely.

    Read the article

  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator==donator.id
                    ).as_scalar()
                )),
                link_table.c.id!=donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active==True,
                link_table.c.id!=donator.id,
                request_table.c.donator==donator.id,
                link_table.c.id!=request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There are several links (donators) in request_table pointing to one link in link_table. I want to get links from link_table which have not yet been "requested", but the merged query does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!
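    A sketch of one way to express "links this donator has not yet requested" in a single statement, using an outer join rather than the implicit cross join in the merged query above. This assumes the old-style SQLAlchemy expression API used in the question and is untested:

        # Sketch: LEFT OUTER JOIN links -> requests (restricted to this donator),
        # keeping only the rows where no matching request exists.
        joined = link_table.outerjoin(
            request_table,
            and_(
                request_table.c.recipient == link_table.c.id,
                request_table.c.donator == donator.id,
            ),
        )

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                request_table.c.id == None,   # no request row matched the join
            ),
            from_obj=[joined],
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()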

    Read the article

  • Undefined method `add' on a cucumber step that usually works.

    - by Josiah Kiehl
    I have a path defined:

        when /the admin home\s?page/
          "/admin/"

    I have a scenario that is passing:

        Scenario: Let admins see the admin homepage
          Given "pojo" is logged in
          And "pojo" is an "admin"
          And I am on the admin home page
          Then I should see "Hi there."

    And I have a scenario that is failing:

        Scenario: Review flagged photo
          Given "pojo" is logged in
          And "pojo" is an "admin"
          ...bunch of steps that create stuff in the database...
          And I am on the admin home page
          Then ... the rest of the steps

    The step that fails in the second one is "And I am on the admin home page", which passes just fine in the first scenario. Here's the error I get:

        And I am on the admin home page    # features/step_definitions/web_steps.rb:18
          undefined method `add' for {}:Hash (NoMethodError)
          ./app/controllers/admin_controller.rb:13:in `index'
          ./app/controllers/admin_controller.rb:11:in `each'
          ./app/controllers/admin_controller.rb:11:in `index'
          /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
          ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/'
          features/admin.feature:52:in `And I am on the admin home page'

    This is very odd... why would it be fine in the first case, and not in the second, where the only difference is a bunch of steps that create records in the db?

    [edit] Here's the step that adds stuff to the database:

        Given /^there is a "([^\"]*)" with the following:$/ do |model, table|
          model.constantize.create!(table.rows_hash)
        end

    Read the article

  • Django class with an array of "parent" ForeignKeys issue

    - by user298032
    Let's say I have a class called Fruit, with child classes for the different kinds of fruit carrying their own specific attributes, and I want to collect them in a FruitBasket:

        class Fruit(models.Model):
            type = models.CharField(max_length=120, default='banana', choices=FRUIT_TYPES)
            ...

        class Banana(Fruit):
            """banana (fruit type)"""
            length = models.IntegerField(blank=True, null=True)
            ...

        class Orange(Fruit):
            """orange (fruit type)"""
            diameter = models.IntegerField(blank=True, null=True)
            ...

        class FruitBasket(models.Model):
            fruits = models.ManyToManyField(Fruit)
            ...

    The problem I seem to be having is that when I retrieve and inspect the Fruits in a FruitBasket, I only get the Fruit base class and can't reach the child class attributes. I think I understand what is happening: when the collection is retrieved from the database, the only fields fetched are the Fruit base class fields. But is there some way to get the child class attributes as well, without multiple expensive database queries? (For example, I could get the collection, then retrieve the child Fruit classes by the id of each element.) Thanks in advance, Chuck
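    A hedged sketch of the kind of downcast hinted at above. It assumes the values of the type field match the lowercased child class names, which the question does not state; Django's multi-table inheritance exposes the child row through a reverse one-to-one accessor on the parent (e.g. fruit.banana), at the cost of one extra query per access:

        # Sketch: pick the concrete child using the 'type' field (assumption).
        from django.core.exceptions import ObjectDoesNotExist

        def concrete_fruit(fruit):
            try:
                return getattr(fruit, fruit.type)   # e.g. fruit.banana, fruit.orange
            except ObjectDoesNotExist:
                return fruit                        # no child record; plain Fruit

        for fruit in basket.fruits.all():
            print concrete_fruit(fruit)             # Python 2-era print, as in Django 1.x code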

    Read the article

  • How to cache pages using background jobs?

    - by Alexandre
    Definitions: a resource is a collection of database records; regeneration is processing those records and outputting the corresponding HTML.

    Current flow:

        1. Receive client request
        2. Check for resource in cache
        3. If not in cache, or cache expired, regenerate
        4. Return result

    The problem is that the regeneration step can tie up a single server process for 10-15 seconds. If a couple of users request the same resource, that can result in a couple of processes regenerating the exact same resource simultaneously, each taking 10-15 seconds.

    Wouldn't it be preferable to have the frontend signal some background process, saying "Hey, regenerate this resource for me"? But then what would it display to the user? "Rebuilding" is not acceptable. All resources would have to be in cache ahead of time, which could be a problem as the database would almost be duplicated on the filesystem (too big to fit in memory). Is there a way to avoid this? Not ideal, but it seems like the only way out.

    But then there's one more problem: how to keep two processes from requesting the regeneration of a resource at the same time? The background process could be regenerating the resource when a frontend asks for the regeneration of the same resource.

    I'm using PHP and the Zend Framework, in case someone wants to offer a platform-specific solution. Not that it matters, though - I think this problem applies to any language/framework. Thanks!
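    A minimal sketch of one way to keep two processes from regenerating the same resource at once: an advisory file lock, with everyone who loses the lock serving the stale copy instead of waiting. Function and path names here are illustrative, and regenerate() is a placeholder for the 10-15 second step:

        <?php
        // Sketch: only the process that wins the lock rebuilds the resource.
        function getResource($id) {
            $cacheFile = "/var/cache/app/resource_" . (int)$id . ".html";
            $fresh = file_exists($cacheFile) && (time() - filemtime($cacheFile) < 300);
            if ($fresh) {
                return file_get_contents($cacheFile);
            }
            $lock = fopen($cacheFile . ".lock", "c");
            if (flock($lock, LOCK_EX | LOCK_NB)) {           // we won: rebuild
                $html = regenerate($id);                      // the expensive step
                file_put_contents($cacheFile, $html, LOCK_EX);
                flock($lock, LOCK_UN);
            } elseif (file_exists($cacheFile)) {              // someone else is rebuilding
                $html = file_get_contents($cacheFile);        // serve the stale copy
            } else {
                $html = "Rebuilding";                         // nothing cached yet
            }
            fclose($lock);
            return $html;
        }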

    Read the article

  • General JDBC Setup

    - by AeroDroid
    So I have a MySQL database set up on a Debian server and it works fine from a phpMyAdmin client. I'm currently working on a project to write a Java server that would be able to use the MySQL database that is already on this server through a JDBC connection. I've looked at many tutorials and pieces of documentation, but all of them seem to explain only the client-side code, and I have yet to figure out how to successfully open a JDBC connection to the server. As far as I can tell, the program has the drivers properly set up, because it's not crashing anymore (I simply point the Java Build Path of my program at the Connector/J provided by MySQL). My program looks like this:

        import java.sql.*;

        public class JDBCTest {
            public static void main(String[] args) {
                System.out.println("Started!");
                try {
                    DriverManager.registerDriver(new com.mysql.jdbc.Driver());
                    System.out.println("Driver registered. Connecting...");
                    Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/", "root", "password");
                    System.out.println("Connected!");
                    conn.close();
                } catch (SQLException e) {
                    System.out.println("Error!");
                    e.printStackTrace();
                }
            }
        }

    This is what's printed:

        Started!
        Driver registered. Connecting...

    It's as if DriverManager.getConnection() just freezes there. I'm sure this is a problem with the server, because when I intentionally misspell localhost, or use an IP address, the program crashes within 20 seconds; this just hangs forever. Sorry about the wall of text, but my final question is: does anyone have any information on what I should do or install on the server to get this to work? Thank you so much!
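    Two small things that may help narrow it down (assumptions, since the post doesn't say): name the database and port explicitly, and set Connector/J's timeout properties so the attempt fails fast with a real error instead of hanging. "test" and the timeout values below are placeholders:

        // Sketch: explicit port, database name and fail-fast timeouts.
        Connection conn = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/test?connectTimeout=5000&socketTimeout=30000",
            "root", "password");

    If the same URL works when run on the server itself but hangs remotely, the usual suspects are MySQL's bind-address/skip-networking settings or a firewall on port 3306.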

    Read the article

  • Images from SQL Server JPG/PNG Image Column not being Type Converted to Bitmap in HttpHandlers (cons

    - by kanchirk
    Our Silverlight 3.0 client consumes images stored/retrieved on the file system through ASP.NET HttpHandlers successfully. We are now trying to store and read back images using a SQL Server 2008 database. Please find the stripped-down code below; the exception is "Bitmap is not valid".

        // Store document to the database
        private void SaveImageToDatabaseKK(HttpContext context, string pImageFileName)
        {
            try
            {
                // ADO.NET Entity Framework
                ImageTable documentDB = new ImageTable();
                int intLength = Convert.ToInt32(context.Request.InputStream.Length);

                // Move the file contents into the byte array
                Byte[] arrContent = new Byte[intLength];
                context.Request.InputStream.Read(arrContent, 0, intLength);

                // Insert record into the Document table
                documentDB.InsertDocument(pImageFileName, arrContent, intLength);
            }
            catch { }
        }

    The method to read the row back from the table and send it to the client:

        private void RetrieveImageFromDatabaseTableKK(HttpContext context, string pImageName)
        {
            try
            {
                ImageTable documentDB = new ImageTable();
                var docRow = documentDB.GetDocument(pImageName); // based on image name, which is unique

                // DocData column in the table is of type Image
                if (docRow != null && docRow.DocData != null && docRow.DocData.Length > 0)
                {
                    Byte[] bytImage = docRow.DocData;
                    if (bytImage != null && bytImage.Length > 0)
                    {
                        Bitmap newBmp = ConvertToBitmap(bytImage);
                        if (newBmp != null)
                        {
                            newBmp.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                            newBmp.Dispose();
                        }
                    }
                }
            }
            catch (Exception exRI) { }
        }

        // Convert byte array to Bitmap (byte[] to Bitmap)
        protected Bitmap ConvertToBitmap(byte[] bmp)
        {
            if (bmp != null)
            {
                try
                {
                    TypeConverter tc = TypeDescriptor.GetConverter(typeof(Bitmap));
                    Bitmap b = (Bitmap)tc.ConvertFrom(bmp); // this is where the exception occurs
                    return b;
                }
                catch (Exception) { }
            }
            return null;
        }
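    A hedged alternative to the TypeConverter route, assuming the bytes stored in the Image column really are a complete JPEG/PNG stream (if this version also throws, the data was probably truncated on the way into the table, e.g. by reading fewer bytes than InputStream.Length):

        // Sketch: build the Bitmap from a MemoryStream instead of a TypeConverter.
        protected Bitmap ConvertToBitmap(byte[] bmp)
        {
            if (bmp == null || bmp.Length == 0)
                return null;
            // Note: GDI+ wants the stream kept alive for the Bitmap's lifetime,
            // so it is deliberately not wrapped in a using block here.
            var ms = new System.IO.MemoryStream(bmp);
            return new Bitmap(ms);
        }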

    Read the article

  • How do I create a data transfer object dynamically from an ADO.NET model?

    - by Richard
    I have a pretty simple database with 5 tables, with PKs and relationships set up. I also have an ASP.NET MVC3 project I'm using to create simple web services that feed JSON/XML to a mobile app using POST/GET. To access my data I'm using an ADO.NET entity model class to handle generation of the entities.

    Due to issues with serialization/circular references created by the auto-generated relations from the ADO.NET entity model, I've been forced to create data transfer objects that strip out the relations and data that don't need to be transferred.

    Question 1: is there an easier way to create DTOs using the Entity Framework itself? I.e., can I specify only the entity properties I want to convert to JsonResults? I don't wish to use any third-party frameworks if I can help it.

    Question 2: a side question about Entity Framework. Say I create an ADO.NET entity model in one project within a solution. Because that model relies on the connection to the database specified in project A, can project B somehow use that model with a similar connection? Both projects are in the same solution. Thanks!
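    For Question 1, a hedged sketch of shaping the payload with a plain LINQ projection, so only the listed properties are serialized and the circular navigation properties never enter the picture. PersonDto, MyEntities and the property names are placeholders, not names from the question:

        // Sketch: project the entity into a small DTO inside the MVC3 action.
        public JsonResult GetPeople()
        {
            using (var db = new MyEntities())   // the generated entity model context
            {
                var dtos = db.People
                    .Select(p => new PersonDto { Id = p.Id, Name = p.Name })
                    .ToList();
                return Json(dtos, JsonRequestBehavior.AllowGet);
            }
        }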

    Read the article

  • WebService Stubs WSDL Axis2

    - by tt0686
    Good afternoon in my timezone. I have a WSDL file with the following snippet:

        <schema targetNamespace="http://util.cgd.pt" xmlns="http://www.w3.org/2001/XMLSchema">
          <import namespace="http://schemas.xmlsoap.org/soap/encoding/" />
          <complexType name="Attribute">
            <sequence>
              <element name="fieldType" nillable="true" type="xsd:string" />
              <element name="dataType" nillable="true" type="xsd:string" />
              <element name="name" nillable="true" type="xsd:string" />
              <element name="value" nillable="true" type="xsd:string" />
            </sequence>
          </complexType>
          <complexType name="ArrayOffAttribute">
            <complexContent>
              <restriction base="soapenc:Array">
                <attribute ref="soapenc:arrayType" wsdl:arrayType="tns3:Attribute[]" />
              </restriction>
            </complexContent>
          </complexType>

    When I use JAX-RPC or Axis1 to generate the stubs, the type Attribute is generated. But when I use Axis2, the type Attribute is not generated and a new type, ArrayOffAttribute, is created instead. This new type extends axis2.databinding.types.soapencoding.Array and allows adding elements through array.addObject(object). My question: I am migrating a Java EE application from web services using JAX-RPC to Axis2, and the existing methods use the Attribute type to fill in attribute fields. Now, in Axis2, I do not have the Attribute type; what should I pass to ArrayOffAttribute.addObject(?)? Could something be wrong with Axis2? I am stuck here :( Thanks in advance. Best regards

    Read the article

  • Why are my Fluent NHibernate SubClass Mappings generating redundant columns?

    - by Brook
    I'm using Fluent NHibernate 1.x build 694, built against NH 3.0. I have the following entities:

        public abstract class Card
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
            public virtual Product Product { get; set; }
            public virtual Sprint Sprint { get; set; }
        }

        public class Story : Card
        {
            public virtual double Points { get; set; }
            public virtual int Priority { get; set; }
            public virtual IList<Task> Tasks { get; set; }
        }

    And the following mappings:

        public class CardMap : ClassMap<Card>
        {
            public CardMap()
            {
                Id(c => c.Id)
                    .Index("Card_Id");
                Map(c => c.Name)
                    .Length(50)
                    .Not.Nullable();
                Map(c => c.Description)
                    .Length(1024)
                    .Not.Nullable();
                References(c => c.Product)
                    .Not.Nullable();
                References(c => c.Sprint)
                    .Nullable();
            }
        }

        public class StoryMap : SubclassMap<Story>
        {
            public StoryMap()
            {
                Map(s => s.Points);
                Map(s => s.Priority);
                HasMany(s => s.Tasks);
            }
        }

    When I generate my schema, the tables are created as follows:

        Card                Story
        ---------           ------------
        Id                  Card_id
        Name                Points
        Description         Priority
        Product_id          Product_id
        Sprint_id           Sprint_id

    I would have expected to see the Product_id and Sprint_id columns ONLY in the Card table, not in the Story table. What am I doing wrong or misunderstanding?

    Read the article

  • How does real world login process happen in web application in Java?

    - by Nitesh Panchal
    Hello, I am very much confused about the login process in a Java web application. I have read many tutorials on jdbcRealm and JAAS, but one thing I don't understand is why I should use them at all. Can't I simply check credentials directly against my database of users, and once they successfully log in to the site, store a variable in the session as a flag? I would then check that session variable on all restricted pages (i.e., keep a filter for the restricted resources' URL pattern), and if the flag doesn't exist, redirect the user to the login page. Does this approach sound correct?

    If yes, then why did JAAS and jdbcRealm come into existence in the first place?

    Secondly, I am trying to implement SaaS (Software as a Service) completely in my web application, meaning everything is done through web services. If I use web services, is it possible to use jdbcRealm? If not, is it possible to use JAAS? If yes, please show me an example which uses MySQL as a database and then authenticates and authorizes.

    I have also heard about Spring Security, but I am confused about that too, in the sense that I don't know how to use web services with Spring Security. Please help me; I am really very confused. I read Sun's tutorials, but they only keep talking about theory - for a programmer to understand a simple concept, they show 100 pages of theory before finally coming to one example.
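    For reference, a hedged sketch of the hand-rolled approach described above - a servlet filter mapped to the restricted URL pattern that checks a session flag. Whether this is preferable to container-managed security (realms/JAAS, which also provide declarative role checks and standard form/BASIC login) is exactly the question being asked; the attribute and path names here are illustrative:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        // Sketch: filter mapped to the restricted URL pattern in web.xml.
        public class AuthFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;
                HttpSession session = request.getSession(false);
                if (session == null || session.getAttribute("user") == null) {
                    response.sendRedirect(request.getContextPath() + "/login.jsp");
                    return;
                }
                chain.doFilter(req, res);
            }
            public void init(FilterConfig config) { }
            public void destroy() { }
        }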

    Read the article

  • JSON Select option question?

    - by user303832
    Hello. I wrote something like this:

        $sql = "SELECT * FROM dbase";
        $strOptions = "";
        if (!$q = mysql_query($sql)) {
            $strOptions = "<option>There was an error connecting to database</option>";
        }
        if (mysql_num_rows($q) == 0) {
            $strOptions = "<option>There are no news in database</option>";
        } else {
            $a = 0;
            while ($redak = mysql_fetch_assoc($q)) {
                $a = $a + 1;
                $vvvv[$a]["id"] = $redak["id"];
                $vvvv[$a]["ssss"] = $redak["ssss"];
            }
        }
        for ($i = 1; $i <= $a; $i++) {
            $strOptions = $strOptions . '<option value="' . $vvvv[$i]["id"] . '">' . $i . '.) - ' . strip_tags($vvvv[$i]["ssss"]) . '</option>';
        }
        echo '[{ "message": "3" },{ "message": "' . count($wpdb->get_results("SELECT * FROM dbase")) . '" },{ "message": "' . $strOptions . '"}]';

    I just cannot parse the JSON later. I parse it this way to fill the select element:

        $jq("#select-my").children().remove();
        $jq("#select-my").append(data[2].message);

    I use the jQuery Form plugin; everything works fine except this - I can't parse the data for the select element. I have also tried json_encode in PHP. Can someone help, please?
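    A hedged sketch of building the same response with json_encode instead of concatenating the JSON by hand. The assumption here (not confirmed by the post) is that the option markup contains double quotes, which break a hand-built JSON string but are escaped automatically by json_encode; $strOptions and $wpdb are the variables from the question:

        <?php
        // Sketch: let json_encode handle escaping and emit a proper content type.
        $payload = array(
            array("message" => "3"),
            array("message" => (string) count($wpdb->get_results("SELECT * FROM dbase"))),
            array("message" => $strOptions),
        );
        header("Content-Type: application/json");
        echo json_encode($payload);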

    Read the article

  • g++ problem with -l option and PostgreSQL

    - by difek
    Hi, I've written a simple program. Here is the code:

        #include <iostream>
        #include <stdio.h>
        #include <D:\Program Files\PostgreSQL\8.4\include\libpq-fe.h>
        #include <string>

        using namespace std;

        int main()
        {
            PGconn *conn;
            PGresult *res;
            int rec_count;
            int row;
            int col;

            cout << "ble ble: " << 8 << endl;

            conn = PQconnectdb("dbname=db_pm host=localhost user=postgres password=postgres");
            if (PQstatus(conn) == CONNECTION_BAD) {
                puts("We were unable to connect to the database");
                exit(0);
            }
        }

    I'm trying to connect to PostgreSQL. I compile this code with the following command:

        gcc -I/"d:\Program Files\PostgreSQL\" -L/"d:\Program Files\PostgreSQL\8.4\lib\" -lpq -o firstcpp.o firstcpp.cpp

    This command is from the following site: http://www.mkyong.com/database/how-to-building-postgresql-libpq-programs/

    When I compile it I get the following error:

        /cygnus/cygwin-b20/H-i586-cygwin32/i586-cygwin32/bin/ld: cannot open -lpq: No such file or directory
        collect2: ld returned 1 exit status

    Can anyone help me? Difek
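    A hedged guess at a corrected command line (assumptions: -I should point at PostgreSQL's include directory, the trailing backslash before each closing quote can swallow the quote on Windows shells, so forward slashes are used instead, and -lpq belongs after the source file so the linker resolves it against the object's unresolved symbols):

        gcc -I"d:/Program Files/PostgreSQL/8.4/include" -L"d:/Program Files/PostgreSQL/8.4/lib" -o firstcpp firstcpp.cpp -lpq

    If ld still cannot find the library, the next step would be checking that an import library for libpq (one the cygwin/MinGW linker can read) actually exists in that lib directory.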

    Read the article

  • Deploy GWT Application to Google App Engine using NetBeans

    - by Yan Cheng CHEOK
    Hello, I am trying to deploy a GWT application to Google App Engine using NetBeans. I had successfully run the GWT StockWatcher sample (http://code.google.com/webtoolkit/doc/latest/tutorial/create.html) using the Personal GlassFish v3 Prelude Domain, by:

        1. Copying the generated source code from StockWatcher to C:\Projects\StockWatcherNetbeans\src\java\com\google\
        2. Modifying C:\Projects\StockWatcherNetbeans\nbproject\gwt.properties:
               gwt.module=com.google.gwt.stockwatcher.StockWatcher
        3. Selecting the Personal GlassFish v3 Prelude Domain, and running.

    All works fine! Now I try to select the Google App Engine server and run, but I get the error "There is no appengine web project opened!" I checked - there is a file called C:\Projects\StockWatcherNetbeans\war\WEB-INF\appengine-web.xml with this content:

        <?xml version="1.0" encoding="UTF-8"?>
        <appengine-web-app xmlns="http://appengine.google.com/ns/1.0"
            xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
            xsi:schemaLocation='http://kenai.com/projects/nbappengine/downloads/download/schema/appengine-web.xsd appengine-web.xsd'>
          <application>StockWatcherNetbeans</application>
          <version>1</version>
        </appengine-web-app>

    I am using:

        NetBeans 6.7.1
        GWT4NB (GWT plugin for NetBeans) 2.6.12
        Google App Engine plugin for NetBeans from http://kenai.com/downloads/nbappengine/1.0_NetBeans671/updates.xml

    Is there anything I have missed? Even when I right-click the project, the "Deploy to Google App Engine" option is disabled. And yes, please do not ask me why I'm not using Eclipse.

    Read the article

  • Running migration on server when deploying with capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not from my local machine, as I am not able to connect to the database from a remote location.

    My server setup: a Debian box running nginx, Passenger, MySQL and a git repository.

    What is the easiest way to do this?

    Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I get:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError,
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
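    A hedged reading of that error: Capistrano opens an SSH connection to each host named in a role and runs deploy:migrate on the :primary host of the :db role, so with role :db, "localhost" it tries to SSH to localhost - the machine running cap, which is refusing the connection - rather than to the server. A sketch of the change, assuming the app and the database live on the same Debian box:

        # Sketch: point the :db role at the server so deploy:migrate runs there,
        # where MySQL is reachable.
        role :web, domain
        role :app, domain
        role :db,  domain, :primary => true

        after "deploy", "deploy:migrate"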

    Read the article

  • How much does Website Development cost nowadays?

    - by Andreas Grech
    I am thinking of setting up my own freelance business, but coming from a workplace that offers a particular service to huge clients, I do not know what the current rates for websites are. I know that as technology keeps changing (most of the time for the better), the amount you charge for a single website changes constantly. For example, I don't think static websites (with just static HTML pages) are that expensive today, are they? (As I said, I might be mistaken, since I haven't really touched this freelance industry yet.)

    So, freelance web developers out there, can you give me estimates of what you charge your clients? Some examples of websites I would like an approximate charge for:

        - ~10 static HTML pages
        - ~10 DHTML pages (with maybe a flashy menu on the top/side)
        - Database-driven websites with a standard CMS (be it one you developed, or an existing one)
        - Database-driven websites with a custom-built CMS for the particular client
        - Using an existing template for the design
        - Starting the design from scratch
        - etc...

    I know that clients don't normally care about the technologies used to build their websites, but do you charge differently according to which technology you use - i.e., is the technology a factor when setting the price? ...be it ASP.NET, PHP, Ruby on Rails, etc.

    Also, how do you go about charging clients for your services? What are the major factors you consider when setting a price tag on a website for a client? And better yet, how do you even find prospective clients? <= [or should I leave this question for a different post?]

    By the way, in your post, please also mention some numbers (in cash values, be it in USD, GBP, EUR or anything), because I want to be able to calculate some averages from this post once answers stack up.

    Read the article

  • Does sfWidgetFormSelect provide a string or an int for the selected item?

    - by han
    Hey guys, I'm having an annoying problem. I'm trying to find out what fields of a form were changed, and then insert that into a table. I managed to var_dump in doUpdateObjectas shown in the following public function doUpdateObject($values) { parent::doUpdateObject($values); var_dump($this->getObject()->getModified(false)); var_dump($this->getObject()->getModified(true)); } And it seems like $this-getObject()-getModified seems to work in giving me both before and after values by setting it to either true or false. The problem that I'm facing right now is that, some how, sfWidgetFormSelect seems to be saving one of my fields as a string. before saving, that exact same field was an int. (I got this idea by var_dump both before and after). Here is what the results on both var dumps showed: array(1) {["annoying_field"]=> int(3)} array(1) {["annoying_field"]=>string(1)"3"} This seems to cause doctrine to think that this is a modification and thus gives a false positive. In my base form, I have under $this->getWidgets() 'annoying_field' => new sfWidgetFormInputText(), under $this->setValidators 'annoying_field' => new sfValidatorInteger(array('required' => false)), and lastly in my configured Form.class.php I have reconfigured the file as such: $this->widgetSchema['annoying_field'] = new sfWidgetFormSelect(array('choices' => $statuses)); statuses is an array containing values like {""a", "b", "c", "d"} and I just want the index of the status to be stored in the database. And also how can I insert the changes into another database table? let's say my Log table? Any ideas and advice as to why this is happen is appreciated, I've been trying to figure it out and browsing google for various keywords with no avail. Thanks! Edit: ok so I created another field, integer in my schema just for testing. I created an entry, saved it, and edited it. this time the same thing happened!

    Read the article

  • First site going live real soon. Last minute questions

    - by user156814
    I am really close to finishing a project that I've been working on. I have done websites before, but never on my own and never a site that involved user-generated data. I have been reading up on things that should be considered before you go live, and I have some questions.

    1) Staging (deploying updates without affecting users). I'm not really sure what this entails, since I'm sure that any type of update would affect users in some way. Does this mean some type of temporary downtime for every update? Can somebody please explain this, and a solution, as well?

    2) Limits. I'm using the Kohana framework and the Auth module for logging users in. I was wondering if this already has some type of limit (on login attempts) built in, and if not, what would be the best way to implement one (save attempts in the database, a cookie, etc.)? If this is not what's meant by limits, can somebody elaborate?

    3) Caching. Like I said, this is my first site built around user content. Considering that, should I cache it?

    4) Backups. How often should I back up my (MySQL) database, and how should I back it up (MySQL export?)?

    The site is currently up, yet not finished, if anybody wants to look at it and see if something pops out to you that should be looked at/fixed: Clashing Thoughts.

    If there is anything else I've overlooked that's not already in the list linked to above, please let me know. Thanks.
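    On question 4, a minimal sketch of a nightly dump as a starting point (the paths, credentials and schedule are placeholders; how often to run it really depends on how much user data you can afford to lose):

        # Sketch: nightly gzipped MySQL dump from cron, one file per day.
        0 3 * * * mysqldump -u backup_user -p'secret' mydb | gzip > /var/backups/mydb-$(date +\%F).sql.gz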

    Read the article

  • Can I add a condition to CakePHP's update statement?

    - by Don Kirkby
    Since there doesn't seem to be any support for optimistic locking in CakePHP, I'm taking a stab at building a behaviour that implements it. After a little research into behaviours, I think I could run a query in the beforeSave event to check that the version field hasn't changed. However, I'd rather implement the check by changing the update statement's WHERE clause from

        WHERE id = ?

    to

        WHERE id = ? AND version = ?

    This way I don't have to worry about other requests changing the database record between the time I read the version and the time I execute the update. It also means I can make one database call instead of two.

    I can see that the DboSource.update() method supports conditions, but Model.save() never passes any conditions to it. It seems like I have a couple of options:

        1. Do the check in beforeSave() and live with the fact that it's not bulletproof.
        2. Hack my local copy of CakePHP to check for a conditions key in the options array of Model.save() and pass it along to the DboSource.update() method.

    Right now, I'm leaning in favour of the second option, but that means I can't share my behaviour with other users unless they apply my hack to their framework. Have I missed an easier option?
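    For completeness, a hedged sketch of what option 1 might look like as a behaviour (the class and field names are assumptions; as noted above, there is still a window between this read and the UPDATE, so it is not bulletproof):

        <?php
        // Sketch: abort the save if the stored version no longer matches the
        // version the form was rendered with.
        class OptimisticLockBehavior extends ModelBehavior {
            public function beforeSave(&$model) {
                if (empty($model->id) || !isset($model->data[$model->alias]['version'])) {
                    return true;   // creates, or saves without a version: pass through
                }
                $stored = $model->field('version', array($model->primaryKey => $model->id));
                if ($stored !== false && $stored != $model->data[$model->alias]['version']) {
                    return false;  // stale write: let the caller handle the conflict
                }
                return true;
            }
        }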

    Read the article

  • Which to use, XMP or RDF?

    - by zotty
    What's the difference between RDF and XMP? From what I can tell, XMP is derived from RDF, so what does it offer that RDF doesn't?

    My particular situation is this: I've got some images which need tagging with details of how an experiment was performed, and what sort of data analysis has been performed on the images. A colleague of mine is pushing for XMP, but he's thinking of the images as photos - they're not really, they're just bits of data. From what I've seen (mainly by opening images in Notepad++), the XMP data looks very similar to RDF - even so far as using RDF in the tag names (e.g. <rdf:Seq>).

    I'd like this data to be usable by other people who use similar instruments for similar experiments, so creating a mini standard (schema?) seems like the way to go. Apologies for the lack of fundamental understanding - I'm a Doctor, not a programmer! If it makes any difference, the language of choice will be C#.

    Edit for more information: First off, thanks for the excellent replies - thinking of XMP as a vocabulary for RDF makes things a lot clearer. The sort of data I'll be storing won't be available in any of the pre-defined sets. It'll detail experimental setups, locations and results. I think using RDF is the way to go.

    Read the article

  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord test case that mimics my problem. I have a People table whose rows have a date attribute. I create a view over that table, adding one column which is just that date plus 20 minutes:

        #!/usr/bin/env ruby
        %w|pp rubygems active_record irb active_support date|.each { |lib| require lib }

        ActiveRecord::Base.establish_connection(
          :adapter  => "sqlite3",
          :database => "test.db"
        )

        ActiveRecord::Schema.define do
          create_table :people, :force => true do |t|
            t.column :name, :string
            t.column :born_at, :datetime
          end
          execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;"
        end

        class Person < ActiveRecord::Base
          validates_presence_of :name
        end

        class Clown < ActiveRecord::Base
        end

        Person.create(:name => "John", :born_at => DateTime.now)

        pp Person.all.first.born_at.class
        pp Clown.all.first.born_at.class
        pp Clown.all.first.twenty_after_born_at.class

    The problem is that the output is:

        Time
        Time
        String

    when I expect the new datetime attribute of the view to also be a Time or DateTime in the Ruby world. Any ideas? I also tried:

        create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p;

    with the same result.
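    A hedged workaround sketch, under the assumption that the root cause is SQLite reporting no useful declared type for the computed view column, so ActiveRecord falls back to treating it as a string: coerce it in the model instead.

        # Sketch: cast the view's computed column back to a Time in Ruby.
        require 'time'

        class Clown < ActiveRecord::Base
          def twenty_after_born_at
            value = read_attribute(:twenty_after_born_at)
            value.is_a?(String) ? Time.parse(value) : value
          end
        end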

    Read the article
