Search Results

Search found 70568 results on 2823 pages for 'ef database first'.


  • PHP code displayed in browser

    - by Drake
    I'm working on a database project and trying to code incrementally. The problem is that when I test the PHP in the browser, it displays the PHP source after my use of "-". The HTML output, which comes AFTER the point where the "-" is, displays properly. Here is the PHP:

        <?php
        function getGraphicNovel() {
            include_once("./connect.php");
            $db_connection = new mysqli($SERVER, $USERNAME, $PASSWORD, $DATABASE);
            if (mysqli_connect_errno()) {
                echo("Can't connect to MySQL Server. Error code: " . mysqli_connect_error());
                return null;
            }
            $stmt = $db_connection->stmt_init();
            $returnValue = "invalid";
            if ($stmt.prepare("select series from graphic_novel_main natural join graphic_novel_misc")) {
                $stmt->execute();
                $stmt->bind_result($series);
                while ($stmt->fetch()) {
                    echo "<tr><td>" . $series . "</td></tr>";
                }
                $stmt->close();
            }
            $db_connection->close();
        }
        getGraphicNovel();
        ?>

    Here is a link to the page; hopefully it works for people outside the school's network: http://plato.cs.virginia.edu/~paw5k/cainedb/viewall.html. If anyone knows why this is happening, your input would be great!

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. At the moment let's assume the Subject, Verb, Object approach… For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XML-esque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates" -- simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand -- not necessarily in the query creation (which is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing smart search (i.e. instead of MATCH ALL or MATCH ANY, I simply let them choose per criterion -- I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late and my brain is toast. Best.
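    One way to model this is a single criteria table with one row per (field, operator, value, connector) entry, where the AND/OR connector is stored per row rather than once per search. A minimal sketch in Python of rebuilding a WHERE clause from such rows; the field names, operator names, and the `criteria` structure are illustrative assumptions, not taken from the question:

        # Sketch: rebuild a SQL WHERE clause from stored smart-search criteria.
        # Each criterion is one stored row: (field, verb, value, connector).
        # All names here are hypothetical; adapt them to the real schema.

        OPERATORS = {
            "contains": ("{field} LIKE %s", lambda v: f"%{v}%"),
            "does_not_contain": ("{field} NOT LIKE %s", lambda v: f"%{v}%"),
            "is_equal_to": ("{field} = %s", lambda v: v),
            "greater_than": ("{field} > %s", lambda v: v),
            "less_than": ("{field} < %s", lambda v: v),
        }

        def build_where(criteria):
            """criteria: list of dicts with keys field, verb, value, connector."""
            clauses, params = [], []
            for i, c in enumerate(criteria):
                template, transform = OPERATORS[c["verb"]]
                clause = template.format(field=c["field"])
                if i > 0:
                    clause = f"{c['connector']} {clause}"   # per-row AND / OR
                clauses.append(clause)
                params.append(transform(c["value"]))
            return " ".join(clauses), params

        # Example: date between two dates, OR'ed with a subject match.
        criteria = [
            {"field": "date_received", "verb": "greater_than", "value": "2010-01-01", "connector": None},
            {"field": "date_received", "verb": "less_than", "value": "2010-02-01", "connector": "AND"},
            {"field": "subject", "verb": "contains", "value": "invoice", "connector": "OR"},
        ]
        where, params = build_where(criteria)
        print("WHERE " + where, params)

    In real use you would also want some form of explicit grouping stored per criterion, since mixed AND/OR without parentheses is ambiguous.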

    Read the article

  • How do I connect to SQL Server with VB?

    - by Wayne Werner
    Hi, I'm trying to connect to a SQL Server instance from VB. The SQL Server is across the network and uses my Windows login for authentication. I can access the server using the following Python code:

        import odbc
        conn = odbc.odbc('SignInspection')
        c = conn.cursor()
        c.execute("SELECT * FROM list_domain")
        c.fetchone()

    This code works fine, returning the first result of the SELECT. However, I've been trying to use SqlClient.SqlConnection in VB, and it fails to connect. I've tried several different connection strings, but this is the current code:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim conn As New SqlClient.SqlConnection
            conn.ConnectionString = "data source=signinspection;initial catalog=signinspection;integrated security=SSPI"
            Try
                conn.Open()
                MessageBox.Show("Sweet Success")
                'Insert some code here, woo
            Catch ex As Exception
                MessageBox.Show("Failed to connect to data source.")
                MessageBox.Show(ex.ToString())
            Finally
                conn.Close()
            End Try
        End Sub

    It fails miserably and gives me an error that says "A network-related or instance-specific error occurred... (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)". I'm fairly certain it's my connection string, but nothing I've found has given me any solid examples (server=mySQLServer is not a solid example) of what I need to use. Thanks! -Wayne
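    The working Python code goes through an ODBC DSN named 'SignInspection', so one way to find out what the VB data source should be is to connect with an explicit server name instead of a DSN. A sketch using pyodbc, assuming that package is available; the server and instance names below are placeholders, not values from the question:

        # Sketch: connect to SQL Server with Windows authentication via pyodbc,
        # spelling out the server explicitly instead of relying on a DSN.
        # SERVER and DATABASE values are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};"
            "SERVER=someserver\\SQLEXPRESS;"   # host, or host\instance for a named instance
            "DATABASE=SignInspection;"
            "Trusted_Connection=yes;"          # Windows (integrated) authentication
        )
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM list_domain")
        print(cursor.fetchone())
        conn.close()

    Whatever SERVER value works here, including any host\instance suffix, is typically what belongs in the VB connection string's "data source" field.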

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.

    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.

    2) So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, but I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
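    One way around "bulk insert loses the generated IDs" is to generate the keys on the client before inserting, e.g. UUIDs, so the program already knows every ID when it batches the rows. A minimal sketch in Python, using an in-memory SQLite table as a stand-in for the real MySQL table; the table and column names are made up for illustration:

        # Sketch: generate primary keys client-side (UUIDs) so IDs are known
        # before a batched insert. SQLite stands in for MySQL here; the table
        # and column names are illustrative only.
        import sqlite3
        import uuid

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (id CHAR(32) PRIMARY KEY, payload TEXT)")

        # Build a batch of 10 rows; each ID is known up front.
        batch = [(uuid.uuid4().hex, f"payload {n}") for n in range(10)]

        conn.executemany("INSERT INTO events (id, payload) VALUES (?, ?)", batch)
        conn.commit()

        # The program can keep using the IDs it generated; no round trip needed.
        known_ids = [row_id for row_id, _ in batch]
        print(known_ids[:3])

    On InnoDB in particular, random keys do scatter inserts across the clustered index, which is why many setups keep the auto-increment PK and store the UUID in a separate unique column; that is roughly the trade-off the question describes.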

    Read the article

  • How do people handle foreign keys on clients when synchronizing to a master DB?

    - by excsm
    Hi, I'm writing an application with offline support, i.e. browser/mobile clients sync commands to the master DB every so often. I'm using UUIDs on both the client and server side. When syncing up to the server, the server will return a map of local UUIDs (luids) to server UUIDs (suids). Upon receiving this map, clients update their records' suid attributes with the appropriate values. However, say a client record, e.g. a todo, has an attribute 'list_id' which holds the foreign key to the todo's list record. I use luids in foreign keys on clients. However, when that attribute is sent over to the server, it would dirty the server DB with luids rather than the suids the server is using. My current solution is for the master server to keep a record of the mappings of luids to suids (per client id) and, for each foreign key in a command, look up the suid for that particular client and use the suid instead. I'm wondering whether others have come across this problem and, if so, how they have solved it? Is there a more efficient, simpler way? I took a look at the question "Synchronizing one or more databases with a master database - Foreign keys (5)" and someone seemed to suggest my current solution as one option, composite keys using suids and auto-incrementing sequences as another, and a third option using negative ids for client ids and then updating all negative ids with the suids. Both of those other options seem like a lot more work. Thanks, Saimon
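    For reference, the server-side rewrite described above can be a fairly small step: keep a per-client map of luid to suid and translate every foreign-key field in an incoming command before applying it. A sketch in Python; the command shape and field names are invented for illustration:

        # Sketch: translate client-local UUIDs (luids) to server UUIDs (suids)
        # in incoming sync commands. Field names and command shape are hypothetical.

        FK_FIELDS = {"list_id", "todo_id"}   # which attributes hold foreign keys

        def remap_command(command, luid_to_suid):
            """Return a copy of the command with luid foreign keys replaced by suids."""
            remapped = dict(command)
            for field in FK_FIELDS & command.keys():
                luid = command[field]
                # Fall back to the value itself: it may already be a suid.
                remapped[field] = luid_to_suid.get(luid, luid)
            return remapped

        # Example: per-client mapping built when earlier records were synced.
        luid_to_suid = {"luid-123": "suid-9f2", "luid-456": "suid-c71"}

        incoming = {"type": "create_todo", "title": "buy milk", "list_id": "luid-123"}
        print(remap_command(incoming, luid_to_suid))
        # {'type': 'create_todo', 'title': 'buy milk', 'list_id': 'suid-9f2'}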

    Read the article

  • WordPress inserting comments via wp_insert_comment()

    - by Cyber Junkie
    Hello all, happy holidays! :) I'm trying to insert comments into my WordPress blog via the wp_insert_comment() function. It's for a plugin I'm trying to make. I have this code in my header for testing. It works every time I refresh the page.

        $agent = $_SERVER['HTTP_USER_AGENT'];
        $data = array(
            'comment_post_ID' => 256,
            'comment_author' => 'Dave',
            'comment_author_email' => '[email protected]',
            'comment_author_url' => 'http://www.someiste.com',
            'comment_content' => 'Lorem ipsum dolor sit amet...',
            'comment_author_IP' => '127.3.1.1',
            'comment_agent' => $agent,
            'comment_date' => date('Y-m-d H:i:s'),
            'comment_date_gmt' => date('Y-m-d H:i:s'),
            'comment_approved' => 1,
        );
        $comment_id = wp_insert_comment($data);

    It successfully inserts comments into the database. The problem: the comments don't show via the Disqus comment system. I compared table rows and noticed that user_agent differs. Normal comments use, for example, Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv..., and Disqus comments use Disqus/1.1(2.61):119598902 (the numbers are different for each comment). Does anyone know how to insert comments with wp_insert_comment() when Disqus is enabled?

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all data on the map stored in 1 table. Whenever the map display is loaded, it simply queries the database for contents in x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure for such a design is (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Should it just keep growing and growing? There are a lot of queries against the table... so I'm just trying to see what is best :/
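    Whether one table keeps up usually depends less on row count than on indexing: a composite index on (x, y) lets the viewport query stay index-driven as the table grows. A minimal sketch in Python with SQLite standing in for the real database; the table and column names are made up:

        # Sketch: a composite index on (x, y) so viewport queries on a single
        # map table stay fast. SQLite stands in for MySQL; names are illustrative.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE map_items (
                id        INTEGER PRIMARY KEY AUTOINCREMENT,
                x         INTEGER NOT NULL,
                y         INTEGER NOT NULL,
                kind      TEXT,
                action_id INTEGER            -- points into a per-type actions table
            )
        """)
        conn.execute("CREATE INDEX idx_map_items_xy ON map_items (x, y)")

        # Typical viewport query: everything inside a rectangle.
        rows = conn.execute(
            "SELECT id, x, y, kind FROM map_items "
            "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
            (0, 50, 0, 50),
        ).fetchall()
        print(rows)

    Manually splitting the table by coordinate range is essentially hand-rolled partitioning; with an index like this it is rarely needed until the table is genuinely huge.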

    Read the article

  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally have a natural flow, e.g.:

        Booked In
        Assigned to Technician
        Diagnosing problem
        Waiting for Client Confirmation
        Repaired & Ready for Pickup
        Repaired & Couriered
        Irreparable & Ready for Pickup
        Quote Rejected

    Arguably, some of these statuses can be normalised to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:

        Pending
        Completed
        Shipped
        Cancelled
        Refunded

    The status titles and descriptions are in one place for editing and are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure: I would like to produce a list of records that match this criterion: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields:

        SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
        FROM people_person
        GROUP BY slug
        HAVING (COUNT(slug) > 1)
        ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?
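    One common way to add the "has at least one related item" condition is an EXISTS clause per related table, OR'ed together inside the duplicate-finding query. A sketch run from Python with SQLite standing in for MySQL so it can execute on its own; the related table and foreign-key names (articles_article, person_id, and so on) are guesses for illustration, not taken from the actual schema:

        # Sketch: restrict the duplicate-slug query to people who have at least
        # one related item. Related table / column names are assumptions.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE people_person (id INTEGER PRIMARY KEY, first_name TEXT,
                                        last_name TEXT, slug TEXT);
            CREATE TABLE articles_article (id INTEGER PRIMARY KEY, person_id INTEGER);
            CREATE TABLE blog_entry (id INTEGER PRIMARY KEY, person_id INTEGER);
            CREATE TABLE podcast_episode (id INTEGER PRIMARY KEY, person_id INTEGER);

            INSERT INTO people_person VALUES (1,'Ann','Lee','ann-lee'),
                                             (2,'Ann','Lee','ann-lee'),
                                             (3,'Bob','Roe','bob-roe');
            INSERT INTO articles_article VALUES (1, 1);
        """)

        query = """
        SELECT p.id, p.first_name, p.last_name, p.slug
        FROM people_person AS p
        WHERE p.slug IN (SELECT slug FROM people_person GROUP BY slug HAVING COUNT(*) > 1)
          AND (EXISTS (SELECT 1 FROM articles_article a WHERE a.person_id = p.id)
            OR EXISTS (SELECT 1 FROM blog_entry b WHERE b.person_id = p.id)
            OR EXISTS (SELECT 1 FROM podcast_episode e WHERE e.person_id = p.id))
        ORDER BY p.last_name, p.first_name, p.id;
        """
        print(conn.execute(query).fetchall())   # only person 1: duplicate slug + an article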

    Read the article

  • MySQL partitioning: partitions outside of the date range are included

    - by Sturlum
    Hi, I have just tried to configure partitions based on date, but it seems that MySQL still includes a partition with no relevant data. It will use the relevant partition but also include the oldest for some reason. Am I doing it wrong? The version is 5.1.44 (MyISAM). I first added a few partitions based on "day", which is of type "date":

        ALTER TABLE ptest PARTITION BY RANGE(TO_DAYS(day)) (
            PARTITION p1 VALUES LESS THAN (TO_DAYS('2009-08-01')),
            PARTITION p2 VALUES LESS THAN (TO_DAYS('2009-11-01')),
            PARTITION p3 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p4 VALUES LESS THAN (TO_DAYS('2010-05-01'))
        );

    After a query, I find that it uses the "old" partition, which should not contain any relevant data:

        mysql> explain partitions select * from ptest where day between '2010-03-11' and '2010-03-12';
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | partitions | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | ptest | p1,p4      | range | day           | day  | 3       | NULL |   79 | Using where |
        +----+-------------+-------+------------+-------+---------------+------+---------+------+------+-------------+

    When I select a single day, it works:

        mysql> explain partitions select * from ptest where day = '2010-03-11';
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        | id | select_type | table | partitions | type | possible_keys | key  | key_len | ref   | rows | Extra |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+
        |  1 | SIMPLE      | ptest | p4         | ref  | day           | day  | 3       | const |   39 |       |
        +----+-------------+-------+------------+------+---------------+------+---------+-------+------+-------+

    Read the article

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model:

        class Foo(models.Model):
            # ...
            ftw = models.CharField(blank=True)
            bar = models.ForeignKey(Bar)

    Right after flushing the database, I use the shell:

        Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
        [GCC 4.4.5] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from apps.foo.models import Foo
        >>> Foo.objects.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator
            for row in compiler.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        DatabaseError: column foo_foo.bar_id does not exist
        LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo...

    What am I doing wrong here?

    Read the article

  • Why does cx_Oracle execute() not like my string now?

    - by Frank Stallone
    I downloaded cx_Oracle some time ago and wrote a script to convert data to XML. I've had to reinstall my OS and grabbed the latest version of cx_Oracle (5.0.3), and all of a sudden my code is broken. The first thing was that cx_Oracle.connect wanted unicode rather than a string for the username and password; that was very easy to fix. But now it keeps failing on cursor.execute and tells me my string is not a string, even when type() tells me it is a string. Here is a test script I initially used ages ago that worked fine on my old version but does not work with cx_Oracle now:

        import cx_Oracle

        ip = 'url.to.oracle'
        port = 1521
        SID = 'mysid'
        dsn_tns = cx_Oracle.makedsn(ip, port, SID)

        connection = cx_Oracle.connect(u'name', u'pass', dsn_tns)
        cursor = connection.cursor()
        cursor.arraysize = 50
        sql = "select isbn, title_code from core_isbn where rownum<=20"
        print type(sql)
        cursor.execute(sql)
        for isbn, title_code in cursor.fetchall():
            print "Values from DB:", isbn, title_code
        cursor.close()
        connection.close()

    When I run that I get:

        Traceback (most recent call last):
          File "C:\NetBeansProjects\Python\src\db_temp.py", line 48, in <module>
            cursor.execute(sql)
        TypeError: expecting None or a string

    Does anyone know what I may be doing wrong?

    Read the article

  • Rails log shows unexpected data as to the time spent on DB work

    - by Arhimed
    I'm running on WinXP + Ruby 1.8.6 + Rails 2.3.5 (frozen to the project) in the development environment. Looking at development.log, I observe inconsistent data as to the time spent on database work.

    Example #1 (good):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:54) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (563.0ms)  SHOW FIELDS FROM `cities`
          City Load (15.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 953ms (DB: 578) | 302 Found [http://xyz/]

    Example #2 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 12:15:36) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (0.0ms)  SHOW FIELDS FROM `cities`
          City Load (0.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 47ms (DB: 32) | 302 Found [http://xyz/]

    Example #2 shows 32ms spent on the DB while there were just 2 SQL queries, both reported as taking zero time.

    Example #3 (unexpected):

        Processing PagesController#index (for 127.0.0.1 at 2010-05-11 11:21:24) [GET]
          Parameters: {"action"=>"index", "controller"=>"pages"}
          City Columns (63.0ms)  SHOW FIELDS FROM `cities`
          City Load (62.0ms)  SELECT * FROM `cities` WHERE (`cities`.`short_name` = 'NY') LIMIT 1
        Redirected to http://xyz:3000/sightings
        Completed in 1187ms (DB: 297) | 302 Found [http://xyz/]

    Example #3 shows 297ms while the queries took 63ms and 62ms (125ms in total). I can't understand it. Could someone explain? Thanks in advance.

    Read the article

  • What is a good approach to preloading data?

    - by Bob Horn
    Are there best practices out there for loading data into a database, to be used with a new installation of an application? For example, for application foo to run, it needs some basic data before it can even be started. I've used a couple of options in the past.

    T-SQL for every row that needs to be preloaded:

        IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName)
            INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser])
            VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser)

    Another option is a spreadsheet. Each tab represents a table, and data is entered into the spreadsheet as we realize we need it. Then, a program can read this spreadsheet and populate the DB. There are complicating factors, including the relationships between tables, so it's not as simple as loading tables by themselves. For example, if we create Security.Member rows and then want to add those members to Security.Role, we need a way of maintaining that relationship. Another factor is that not all databases will be missing this data. Some locations will already have most of the data, and others (which may be new locations around the world) will start from scratch. Any ideas are appreciated.
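    A common middle ground between hand-written IF NOT EXISTS scripts and a spreadsheet importer is a small seed script that upserts by natural key and resolves relationships by looking keys up at load time rather than hard-coding IDs. A sketch in Python with SQLite standing in for the real database; the tables and keys are simplified stand-ins for things like Master.Site or Security.Member / Security.Role, not the actual schema:

        # Sketch: idempotent seed loading that resolves relationships by natural key.
        # SQLite stands in for the real DB; table and column names are simplified.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE role   (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE member_role (member_id INTEGER, role_id INTEGER,
                                      UNIQUE (member_id, role_id));
        """)

        def ensure(table, name):
            """Insert the row if its natural key is missing; return its id either way."""
            conn.execute(f"INSERT OR IGNORE INTO {table} (name) VALUES (?)", (name,))
            return conn.execute(f"SELECT id FROM {table} WHERE name = ?", (name,)).fetchone()[0]

        seed = {
            "members": ["admin", "installer"],
            "roles": {"Administrators": ["admin"], "Operators": ["admin", "installer"]},
        }

        for member in seed["members"]:
            ensure("member", member)
        for role, members in seed["roles"].items():
            role_id = ensure("role", role)
            for member in members:
                conn.execute("INSERT OR IGNORE INTO member_role VALUES (?, ?)",
                             (ensure("member", member), role_id))
        conn.commit()
        print(conn.execute("SELECT * FROM member_role").fetchall())

    Because every insert goes through the natural key, re-running the script against a database that already has most of the data is a no-op for the rows that exist, which covers the mixed "fresh install vs. partly populated" situation described above.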

    Read the article

  • I have created a PHP script but cannot extract the primary key; I have given the flow below

    - by Parth
    I am using a MySQL DB and working with Joomla. My requirement is to track activity like insert/update/delete on any table and store it in another audit table using triggers, i.e. I am doing auditing. About the DB's table structure: a few tables don't have any PK or auto-increment key. The flow of my script is:

        1. Fetch all tables from the DB.
        2. Check whether each table has any trigger or not. If yes, move on to check the next table, and so on.
        3. If no trigger is found, create the triggers for the table, such that:
           - it first checks whether the table has a primary key or not (for inserting into the tracking audit table for every change made);
           - if it has a primary key, it uses it in the creation of the trigger;
           - if it doesn't find any PK, it proceeds to create the trigger without inserting any id into the audit table.

    Now here, my problem is that I need the PK every time so that I can record the id in the particular table in which the insert/update/delete is performed, so that I can later use this audit track table to replicate the changes in the production DB. As I mentioned earlier, some tables have no PK and no auto-increment key, so what should I do to get the particular id in which the change was done? Please guide me... GEEKS!!!

    Read the article

  • Does a Postgresql dump create sequences that start with - or after - the last key?

    - by bennylope
    I recently created a SQL dump of a database behind a Django project, and after cleaning the SQL up a little bit I was able to restore the DB and all of the data. The problem was that the sequences were all mucked up. I tried adding a new user and got the Python error IntegrityError: duplicate key violates unique constraint. Naturally I figured my SQL dump didn't restart the sequence. But it did:

        DROP SEQUENCE "auth_user_id_seq" CASCADE;
        CREATE SEQUENCE "auth_user_id_seq" INCREMENT 1 START 446 MAXVALUE 9223372036854775807 MINVALUE 1 CACHE 1;
        ALTER TABLE "auth_user_id_seq" OWNER TO "db_user";

    I figured out that a repeated attempt at creating a user (or any new row in any table with existing data and such a sequence) allowed for successful object/row creation. That solved the pressing problem. But given that the last user ID in that table was 446 - the same start value in the sequence creation above - it looks like PostgreSQL was simply trying to start creating rows with that key. Does the SQL dump provide a start key that is off by 1? Or should I invoke some other command to start sequences after the given start ID? Keenly curious.
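    For reference, one way to realign a sequence after a restore is to set it to the current MAX of its column with setval(), after which the next nextval() returns MAX + 1. A sketch using psycopg2; the connection parameters are placeholders, and only the auth_user sequence from the question is shown:

        # Sketch: bump a sequence past the highest existing id after a restore.
        # Connection parameters are placeholders; adjust to the real database.
        import psycopg2

        conn = psycopg2.connect(dbname="mydb", user="db_user",
                                password="secret", host="localhost")
        cur = conn.cursor()

        # setval(..., max(id)) makes the *next* nextval() return max(id) + 1.
        cur.execute("""
            SELECT setval('auth_user_id_seq', (SELECT COALESCE(MAX(id), 1) FROM auth_user))
        """)
        conn.commit()

        cur.execute("SELECT last_value FROM auth_user_id_seq")
        print(cur.fetchone())
        cur.close()
        conn.close()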

    Read the article

  • MySQL random rows

    - by n00b
    Please read the whole question... 90% of you don't seem to do that, and some of you only read the title, obviously... and if you don't know the solution, don't answer - then I won't have to downvote you -.-'' I'm entertaining the idea of getting random rows directly from MySQL. What I found was:

        SELECT * FROM tablename WHERE somefield='something' ORDER BY RAND() LIMIT 5

    but even I see how slow that would be. Is the only way to do this something like

        SELECT * FROM tablename WHERE somefield='something' LIMIT RAND(aincrementvalue-5), 1

    5 times? Or is there a way that I, with my little knowledge of databases, can't come up with? (No, I don't want random indexes. I hate the idea of them...)

    @commenters - please first look, then think, then look again, think again and then post. I won't point fingers, but I dislike stupid comments. Why do I think random indexes are a nasty hack? They don't give you random results; they give you x results from a random index in a predefined order. It's like a gapless id, only in the wrong order. If you fetch 1 row at a time to get true randomness, you fall back to my method, but with an additional junk field. Finally, the reason the field exists is only to serve as a helper to something that can be done without it with almost the same performance (and better quality of randomness), so it is a nasty hack ;) I solved it, look at my answer... if you think it's incorrect, please tell me :)
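    For context, the pattern usually suggested here works in three steps: fetch only the matching ids (cheap), sample a few of them in application code, then fetch the full rows by primary key. A sketch in Python; the mysql.connector package, the credentials, and the table/column names mirror the placeholders in the question and are assumptions:

        # Sketch: pick random rows without ORDER BY RAND() over the whole table.
        # Step 1: fetch matching ids. Step 2: sample in Python. Step 3: fetch rows.
        # Connection details and names below are placeholders.
        import random
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user",
                                       password="secret", database="mydb")
        cur = conn.cursor()

        cur.execute("SELECT id FROM tablename WHERE somefield = %s", ("something",))
        ids = [row[0] for row in cur.fetchall()]

        sample = random.sample(ids, min(5, len(ids)))
        if sample:
            placeholders = ", ".join(["%s"] * len(sample))
            cur.execute(f"SELECT * FROM tablename WHERE id IN ({placeholders})", sample)
            for row in cur.fetchall():
                print(row)

        cur.close()
        conn.close()

    This keeps the randomness uniform over the filtered set without any helper column, at the cost of pulling the id list into memory; for very large result sets the id fetch itself becomes the limiting factor.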

    Read the article

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map-reduce might be something I can make use of. But can they really be used to make it this fast, or do I need a more advanced solution? I have also been thinking about cutting up the files into smaller files, so that 1 file with 100,000,000 lines is transformed into 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever, so I think maybe it is just too big to be used. But I don't know if you can write code that will scan it without opening it. Speed is the most important factor here, and I need to know if it can be done as fast as I need, or if I have to store my data in another way, for example in a database like MySQL or something. Thank you in advance to anybody who can give some good feedback.
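    To give a sense of the shape of the hash-table approach: load the first file into a set, then intersect it with each remaining file while streaming line by line, so only the shrinking candidate set stays in memory. A sketch in Python; the file names are placeholders, and whether it fits a 2-second budget depends entirely on hardware and file sizes:

        # Sketch: numbers common to all files, via set intersection, streaming
        # one file at a time. File names are placeholders.
        FILES = [f"numbers_{i}.txt" for i in range(1, 11)]

        def load_set(path):
            with open(path) as f:
                return {line.strip() for line in f if line.strip()}

        common = load_set(FILES[0])
        for path in FILES[1:]:
            survivors = set()
            with open(path) as f:
                for line in f:
                    value = line.strip()
                    if value in common:
                        survivors.add(value)
            common = survivors          # only numbers seen in every file so far
            if not common:              # early exit: nothing can appear in all files
                break

        print(len(common), "numbers appear in all", len(FILES), "files")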

    Read the article

  • Problem with DeleteObject() in Entity Framework 4

    - by Tom
    I have two entities:

        public class User : Entity
        {
            public virtual string Username { get; set; }
            public virtual string Email { get; set; }
            public virtual string PasswordHash { get; set; }
            public virtual List<Photo> Photos { get; set; }
        }

    and

        public class Photo : Entity
        {
            public virtual string FileName { get; set; }
            public virtual string Thumbnail { get; set; }
            public virtual string Description { get; set; }
            public virtual int UserId { get; set; }
            public virtual User User { get; set; }
        }

    When I try to delete a photo, it works fine for the DB (the record gets removed from the table), but not for the Photos collection in the User entity (I can still see that photo in user.Photos). This is my code. What am I doing wrong here? Changing entity properties works fine: when I change a photo's FileName, for example, it gets updated in the DB and in user.Photos.

        var photo = SelectById(id);
        Context.DeleteObject(photo);
        Context.SaveChanges();

        var user = GetUser(userName);
        // the photo I have just deleted is still in user.Photos

    I also tried this, but I'm getting the same results:

        var photo = user.Photos.First();
        Context.DeleteObject(photo);
        Context.SaveChanges();

    Read the article

  • Draw a comparison between an integer in a specific row and a count

    - by XCoderX
    This is a follow-up question to this one: Check for specific integer in a row WHERE user = $name. I want a user to be able to comment on my site exactly five times a day. After those five times, the user has to wait 24 hours. To accomplish that, I raise a counter in my MySQL database, right next to the user: where the name of the user is, that is where the counter gets raised. When it reaches 5 it should stop counting and reset after 24 hours. To check the time I use a timestamp: I check whether the timestamp is older than 24 hours, and if that is the case, the counter gets reset (-5) and the user can comment again. To do that, I use the following code; however, it never stops at five. My guess is that my comparison is wrong somehow:

        $counter = "SELECT FROM table VALUES CommentCounterReset WHERE Name = '$name'";

        if(!isset($_SESSION['ts']));
        {
            $_SESSION['ts'] = time();
        }

        if ($counter >= 5)
        {
            if (time() - $_SESSION['ts'] <= 60*60*24) {
                echo "You already wrote five comments.";
            }
            else {
                $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset-5 WHERE Name = '$name'";
            }
        }
        else
        {
            $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset+1 WHERE Name = '$name'";
            echo "Your comment has been added.";
        }

    Read the article

  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        cache_key = 'regions'
        regions = cache.get(cache_key)
        if result is None:
            """Not Found in Cache"""
            regions = Regions.objects.all()
            cache.set(cache_key, regions, 2592000)  # (2592000 seconds = 30 days)
        return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(self, instance, **kwargs):
            cache_key = 'networks_for_%s' % (self.instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache, sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache, sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean? And a last question: I have templates where I fetch many records from a few tables (relationships). In my view I get records from one table, and in the templates I show them along with related info from the other tables. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache queries from templates? Or do I have to build some big structure in my view (with all related tables), cache it, and send it to the template?

    Read the article

  • Django forms: I cannot save the picture file

    - by dana
    I have the model:

        class OpenCv(models.Model):
            created_by = models.ForeignKey(User, blank=True)
            first_name = models.CharField(('first name'), max_length=30, blank=True)
            last_name = models.CharField(('last name'), max_length=30, blank=True)
            url = models.URLField(verify_exists=True)
            picture = models.ImageField(help_text=('Upload an image (max %s kilobytes)' % settings.MAX_PHOTO_UPLOAD_SIZE), upload_to='jakido/avatar', blank=True, null=True)
            bio = models.CharField(('bio'), max_length=180, blank=True)
            date_birth = models.DateField(blank=True, null=True)
            domain = models.CharField(('domain'), max_length=30, blank=True, choices=domain_choices)
            specialisation = models.CharField(('specialization'), max_length=30, blank=True)
            degree = models.CharField(('degree'), max_length=30, choices=degree_choices)
            year_last_degree = models.CharField(('year last degree'), max_length=30, blank=True, choices=year_last_degree_choices)
            lyceum = models.CharField(('lyceum'), max_length=30, blank=True)
            faculty = models.ForeignKey(Faculty, blank=True, null=True)
            references = models.CharField(('references'), max_length=30, blank=True)
            workplace = models.ForeignKey(Workplace, blank=True, null=True)

    the form:

        class OpencvForm(ModelForm):
            class Meta:
                model = OpenCv
                fields = ['first_name', 'last_name', 'url', 'picture', 'bio', 'domain', 'specialisation', 'degree', 'year_last_degree', 'lyceum', 'references']

    and the view:

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)
                # if 'picture' in request.FILES:
                file = request.FILES['picture']
                filename = file['filename']
                fd = open('%s/%s' % (MEDIA_ROOT, filename), 'wb')
                fd.write(file['content'])
                fd.close()
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.picture = form.cleaned_data['picture']
                    new_obj.created_by = request.user
                    new_obj.save()
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {
                'form': form,
            }, context_instance=RequestContext(request))

    But I don't seem to save the picture in my database... something is wrong, and I can't figure out what :(

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "auto number"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables. I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import - the "Get External Data -- Import" feature in Access has an import "In an Existing Table" option, but it's greyed out. I'm also unsure how I build the relationships between applications and servers for importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file. I think I removed everything, but perhaps I didn't and that's causing the problem? Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (autonumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (autonumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (autonumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)

    Read the article

  • Error with Arabic script in MySQL

    - by fusion
    I inserted data into my MySQL database which includes Arabic script. While the output displays the Arabic correctly, the data in MySQL looks like garbage, something like this:

        '&#1589;&#1614;&#1608;&#1605;&#1615; &#1579;&#1614;&#1604;&#1575;&#1579;&#1614;&#1577;&#1616; &#1571;&#1610;&#1617;&#1575;&#1605;&#1613; &#1605;&#1616;&#1606; &#1603;&#1615;&#1604;&#1617;&#1616; &#1588;&#1614;&#1607;&#1585;&#1613; &#1600; &#1571;&#1585;&#1576;&#1614;&#1593;&#1575;&#1569;&#1615; &#1576;&#1614;&#1610;&#1606;&#1614; &#1582;&#1614;

    Should I be worried about this? If yes, how do I make it appear as proper Arabic script in MySQL? Thanks.
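    What is stored here are HTML numeric character references rather than raw Arabic text, which usually means the text was entity-encoded before insertion or the connection character set forced a lossy conversion. A sketch of both halves in Python, assuming the mysql.connector package; the credentials, table name, and sample string are placeholders: decode already-stored values with html.unescape, and insert new rows over a utf8mb4 connection so the Arabic is stored as-is:

        # Sketch: store Arabic as real UTF-8 and decode rows that were saved as
        # HTML entities. Connection details and the quotes table are placeholders.
        import html
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="user", password="secret",
                                       database="mydb", charset="utf8mb4")
        cur = conn.cursor()

        # 1) New data: insert the raw Unicode string; with a utf8mb4 connection and
        #    a utf8mb4 column it is stored as readable Arabic, not entities.
        cur.execute("INSERT INTO quotes (text) VALUES (%s)", ("صوم ثلاثة أيام",))

        # 2) Existing data: convert entity-encoded rows back to real characters.
        cur.execute("SELECT id, text FROM quotes WHERE text LIKE '%&#%'")
        for row_id, encoded in cur.fetchall():
            cur.execute("UPDATE quotes SET text = %s WHERE id = %s",
                        (html.unescape(encoded), row_id))

        conn.commit()
        cur.close()
        conn.close()

    The column itself also needs a utf8/utf8mb4 character set; otherwise MySQL will mangle the text regardless of what the application sends.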

    Read the article

  • #1146 - Table 'phpmyadmin.pma_recent' doesn't exist

    - by Mumin Ali
    Solution, guys... FYI, I am using XAMPP to run phpMyAdmin, and this error happens during the process of creating a database on localhost. Below is the code of the config.inc file under the phpmyadmin directory:

        <?php
        /*
         * This is needed for cookie based authentication to encrypt password in
         * cookie
         */
        $cfg['blowfish_secret'] = 'xampp'; /* YOU SHOULD CHANGE THIS FOR A MORE SECURE COOKIE AUTH! */

        /*
         * Servers configuration
         */
        $i = 0;

        /*
         * First server
         */
        $i++;

        /* Authentication type and info */
        $cfg['Servers'][$i]['auth_type'] = 'HTTP';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = 'password';
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = true;
        $cfg['Lang'] = '';

        /* Bind to the localhost ipv4 address and tcp */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';

        /* User for advanced features */
        $cfg['Servers'][$i]['controluser'] = 'pma';
        $cfg['Servers'][$i]['controlpass'] = '';

        /* Advanced phpMyAdmin features */
        $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable'] = 'pma_bookmark';
        $cfg['Servers'][$i]['relation'] = 'pma_relation';
        $cfg['Servers'][$i]['table_info'] = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords'] = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages'] = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info'] = 'pma_column_info';
        $cfg['Servers'][$i]['history'] = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';
        $cfg['Servers'][$i]['tracking'] = 'pma_tracking';
        $cfg['Servers'][$i]['userconfig'] = 'pma_userconfig';
        $cfg['Servers'][$i]['recent'] = 'pma_recent';
        $cfg['Servers'][$i]['table_uiprefs'] = 'pma_table_uiprefs';

        /*
         * End of servers configuration
         */
        ?>

    Read the article
