Search Results

Search found 32120 results on 1285 pages for 'django multi table inheri'.


  • Need a MYSQL query to compare two tables and only output non matching results

    - by ee12csvt
    I have two tables in my database: one contains a list of items along with other information about those items; the other contains a list of photographs of these items. The items table gives each item a unique identifier, which is used in the photographs table to identify which item has been photographed. I need to output a list of items that are not linked to any photograph in the second table. Any ideas on how I can do this?
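
    A common shape for this is an exclusion join (LEFT JOIN ... IS NULL) or a NOT EXISTS subquery. A minimal sketch of the former through Django's raw cursor, where the table and column names (items, photographs, item_id) are hypothetical placeholders:

        from django.db import connection

        def items_without_photos():
            # Keep only items that have no matching row in photographs.
            cursor = connection.cursor()
            cursor.execute("""
                SELECT i.id, i.name
                FROM items AS i
                LEFT JOIN photographs AS p ON p.item_id = i.id
                WHERE p.item_id IS NULL
            """)
            return cursor.fetchall()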

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking. I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED ( [ID] ASC )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 GB. A sample of 10 rows from the table:

        ID  Inv_ID  Site_ID  LogCount  LogDay
        --  ------  -------  --------  -------------------
        1   486752      48        14   2009-07-21 00:00:00
        2   119314      51        16   2009-07-21 00:00:00
        3   313678      48        25   2009-07-21 00:00:00
        4   298863       0         1   2009-07-21 00:00:00
        5   119996       0         2   2009-07-21 00:00:00
        6   463777     534         7   2009-07-21 00:00:00
        7   339976     503         2   2009-07-21 00:00:00
        8   333501     570         4   2009-07-21 00:00:00
        9   453955       0        12   2009-07-21 00:00:00
        10  443291       0         4   2009-07-21 00:00:00

    I have the following index on LogInvSearches_Daily:

        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay]
        ON [dbo].[LogInvSearches_Daily] ( [LogDay] ASC )
        INCLUDE ( [Inv_ID], [LogCount] )
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account ID, and I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:

        SELECT TOP 5
               Sum(LogCount) AS Views
             , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
             , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
          AND EXISTS( SELECT NULL
                      FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                      WHERE Acct_ID = 18731
                        AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

    Its execution plan:

        |--Top(TOP EXPRESSION:((5)))
             |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
                  |--Segment
                  |--Segment
                       |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                            |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                                 |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                                      |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                                           |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                                           |    |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                                           |    |    |--Constant Scan
                                           |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                                           |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster and gives me essentially the same execution plan:

        --SET SHOWPLAN_TEXT ON;
        WITH getSearches AS (
            SELECT LogCount
                 -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
            INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                    ON Acct_ID = 18731
                   AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

    And its execution plan:

        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
             |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                  |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                       |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                       |    |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                       |    |    |--Constant Scan
                       |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                       |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

    So given that I'm getting good index seeks in my execution plan, what can I do to get this running faster? Thanks, Dan

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly as a Mapping Developer

    1.1 Control when uniqueness is enabled
    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution
    Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution
    If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like.

    That's all there is to it from the aspect of the mapping developer - it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly as a Procedure or KM Developer

    In the ODI Operator the executed code shows the actual name that is generated. You can also see the runtime code prior to execution (introduced in 11.1.1.7); for example, with the code type 'Pre-executed Code' selected you can see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag, regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

        UNIQUE_STEP_TAG    - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61.
                             Note that this will be a different value for each loop iteration when the step is in a loop.
        UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
        IS_CONCURRENT      - Returns info about the current mapping; returns 0 or 1 (only in % phase)
        GUID_SRC_SET       - Returns the UUID for the current source set/execution unit (only in % phase)

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping and will return 0 or 1.

    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a maximum length and use a specific encoding for the unique tag. It has the syntax:

        getFormattedName(String pName[, String pTechnologyCode])

    This API is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name, provided that the same parameters are passed to the call. For example:

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology, and truncation may need to be applied. When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY
    To summarize, ODI 12c makes it much simpler to utilize mappings in concurrent cases and provides APIs to help develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and the text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options:

    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims the Ajax request down to the bare minimum and doesn't mix the data with the markup. What's not so great is that I now need JavaScript to do my rendering, instead of having a template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.

    Option 2: Send the HTML markup. What's good about this is that the same server-side templating engine I'm using for the rest of my rendering tasks (Django) can also render the lightbox; JavaScript is only used to insert the HTML fragment into the page. So it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call for some reason. I'm not sure what makes me feel uneasy about it. I mean, it's the same way every web page is served up - data plus markup - right?
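
    For what it's worth, a minimal sketch of Option 2 in Django - rendering the fragment server-side and returning it inside the Ajax payload; the view and template names here are hypothetical:

        import json

        from django.http import HttpResponse
        from django.template.loader import render_to_string

        def cart_options(request, item_id):
            # Render the lightbox fragment with the normal template engine,
            # then ship it to the client wrapped in JSON.
            html = render_to_string("cart/options_lightbox.html", {"item_id": item_id})
            return HttpResponse(json.dumps({"html": html}),
                                content_type="application/json")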

    Read the article

  • Should one generally develop a client library for REST services to help prevent API breakages?

    - by BestPractices
    We have a project where the UI code will be developed by the same team but in a different language (Python/Django) from the services layer (REST/Java). The code for each layer lives in a different repository and can follow a different release cycle. I'm trying to come up with a process that will prevent/reduce breaking changes in the services layer from the perspective of the UI layer. I've thought about writing integration tests at the UI-layer level that we'll run whenever we build the UI or the services layer (we're using Jenkins as our CI tool to build the code, which is in two Git repos); if there are failures, then something in the services layer broke and the commit is not accepted. Would it also be a good idea (is it a best practice?) to have the developers of the services layer create and maintain a client library for the REST service, living in the UI layer, which they update whenever there is a breaking change in their service API? Conceivably, we would then have the advantage of a statically-typed API that the UI code builds against. If the client library API changes, the UI code won't compile (so we'll know sooner that there was a breaking change). I'd also still run the integration tests upon building the UI or services layer, to further validate that the integration between the UI and the service(s) still works.
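
    As a rough illustration of what such a client library might look like - a thin wrapper the services team ships alongside the API; the endpoint paths and payloads here are hypothetical:

        import requests

        class OrdersClient:
            """Hand-maintained client for a hypothetical orders REST service."""

            def __init__(self, base_url):
                self.base_url = base_url.rstrip("/")

            def get_order(self, order_id):
                resp = requests.get("%s/orders/%s" % (self.base_url, order_id))
                resp.raise_for_status()
                return resp.json()

            def create_order(self, payload):
                resp = requests.post("%s/orders" % self.base_url, json=payload)
                resp.raise_for_status()
                return resp.json()

    If the service API changes shape, the change shows up as a single edit to this wrapper (and a failing test suite) rather than as scattered breakage across the UI code.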

    Read the article

  • Merge two different API calls into One

    - by dhilipsiva
    I have two different apps in my Django project. One is "comment" and the other is "files". A comment might have files attached to it. The current way of creating a comment with attachments is by making two API calls: the first creates the actual comment and replies with the comment ID, which serves as the foreign key for the files; then, for each file, a new request is made with that comment ID. Please note that "files" is a generic app that can be used with other apps too. What is the cleanest way of turning this into one API call? I want this as a single API call because I'm in a situation where I need to send the user an email, with all the files as attachments, when a comment is made. I know queuing is the ideal way to do it, but I don't have the liberty to add queuing to our stack now, so this was the only way I could think of.
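
    A rough sketch of how the combined endpoint could look, assuming a Django version with transaction.atomic; the model imports, field names and multipart form layout are all hypothetical:

        import json

        from django.db import transaction
        from django.http import HttpResponse

        from comment.models import Comment   # hypothetical models from the
        from files.models import File        # "comment" and "files" apps

        @transaction.atomic
        def create_comment_with_files(request):
            # Create the comment and its attachments in one request, so the
            # whole thing succeeds or fails together.
            new_comment = Comment.objects.create(
                author=request.user, body=request.POST["body"])
            for upload in request.FILES.getlist("attachments"):
                File.objects.create(comment=new_comment, upload=upload)
            return HttpResponse(json.dumps({"id": new_comment.id}),
                                content_type="application/json")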

    Read the article

  • Should I use mod_wsgi embedded mode if I have full control of Apache?

    - by mgibsonbr
    I'm managing a bunch of sites and applications on a shared hosting plan, using Django via mod_wsgi. I had planned to use daemon mode from the beginning (to avoid restart problems), but ended up purchasing a plan that allows me to run a dedicated Apache instance. I kept using daemon mode for convenience, but I'm afraid it's consuming more server resources than it should (I have different projects for each site, each with its own process and process group), so I'm considering switching to embedded mode. Would that be a sensible thing to do? I'd still be able to restart Apache anytime I need to, and I wouldn't need so many child processes and sockets (so I hope the resource usage would decrease). But I'm unsure whether doing so would make it more difficult to manage those sites (if I need to update one, I have to restart all of them), or whether the applications would still be properly isolated from one another. Are these problems really significant (or only a minor nuisance)? Are there other drawbacks I couldn't foresee? I'm looking for advice on any aspect of this setup - maintainability, performance, security, etc. Tips for improving the current setup are also welcome (I know how to configure a basic mod_wsgi setup correctly, but I'm clueless about sensible values for threads, processes, etc.).

    Read the article

  • 'Buy the app' landing page implementations: redirect or javascript popup?

    - by benwad
    My site (using Django) has an app that I'm trying to push - I currently have a piece of middleware that redirects the user to a page advertising the app if they're accessing the site on an iPhone, then sets a cookie so that the user isn't bugged by the message every time they visit. This works fine; however, checking the page with the mobile Googlebot checker shows that the Googlebot gets stuck in the redirect (since it doesn't store cookies) and therefore won't index the proper content. So, I'm trying to think of an alternative implementation that won't hurt the site's Google ranking and won't have any other adverse effects. I've considered a couple of options:

    1. Redirect (the current solution), but don't redirect if the user agent matches the Googlebot's UA string. This would be ideal; however, I'm not sure if Google likes its bot being treated differently from other users, and I'm afraid the site's ranking may somehow be penalised if I go ahead with this.

    2. Use a JavaScript popup instead of a redirect. This would make sure the Googlebot finds the content it needs; however, I envision this approach causing compatibility issues with the myriad mobile devices/browsers out there, and it may affect the page load time.

    How valid are these options? And is there a better option for implementing this feature? I've tried researching this topic but surprisingly can't find any reputable-looking blog posts that explore it.
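
    For reference, a minimal sketch of the kind of redirect middleware described (old-style Django middleware; the cookie name and landing-page URL are hypothetical):

        from django.http import HttpResponseRedirect

        class AppPromoMiddleware(object):
            def process_request(self, request):
                ua = request.META.get("HTTP_USER_AGENT", "")
                # Redirect iPhone visitors who haven't seen the promo yet.
                if "iPhone" in ua and "seen_app_promo" not in request.COOKIES:
                    response = HttpResponseRedirect("/get-the-app/")
                    response.set_cookie("seen_app_promo", "1")
                    return response
                return None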

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following:

    1. It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later into the database.

    2. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be the features currently available to customers at the different pricing tiers. Instead of hardcoding these, we read them from the database.

    The described solution is flexible, but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state from how it runs in production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but that seems like the wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach for handling this content that I'm overlooking?
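
    As an illustration of the read-through caching scheme mentioned above, a minimal sketch - the SiteCopy model, key format and timeout are hypothetical:

        from django.core.cache import cache

        from content.models import SiteCopy   # hypothetical model holding editable text

        def get_site_copy(key, default="Lorem ipsum..."):
            cache_key = "site_copy:%s" % key
            value = cache.get(cache_key)
            if value is None:
                try:
                    value = SiteCopy.objects.get(key=key).text
                except SiteCopy.DoesNotExist:
                    value = default
                cache.set(cache_key, value, 60 * 5)   # re-read from the DB every 5 minutes
            return value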

    Read the article

  • node.js on multi-core machines

    - by zaharpopov
    node.js looks interesting, BUT... I must be missing something - isn't node.js tuned to run only in a single process and thread? Then how does it scale to multi-core CPUs and multi-CPU servers? After all, it's great to make a single-threaded server as fast as possible, but for high loads I would want to use several CPUs. The same goes for making applications faster - it seems the way today is to use multiple CPUs and parallelize the tasks. How does node.js fit into this picture? Is the idea to somehow distribute multiple instances, or what?

    Read the article

  • Azure Table Storage data Consistency

    - by SeeR
    Let's say I have this table in Azure Table Storage:

        public class MyTable
        {
            public string PK { get; set; }
            public string RowPK { get; set; }
            public double Amount { get; set; }
        }

    and a message in an Azure Queue which says "Add 10 to Amount". Now let's say one worker role:

    1. Takes this message from the queue
    2. Takes the row from the table
    3. Amount += 10
    4. Updates the row in the table
    5. And fails

    After a while the message is available in the queue again. So the next worker role:

    1. Takes this message from the queue
    2. Takes the row from the table
    3. Amount += 10
    4. Updates the row in the table
    5. Removes the message from the queue

    These actions result in Amount += 20 instead of Amount += 10. How can I avoid such situations?

    Read the article

  • Create multi-part message in MIME format Freemarker template

    - by Mat Banik
    How do you create an email message that contains a text and an HTML version of the same content? I'd like to know how to set up the Freemarker template, or the headers of the message that will be sent. When I look at the source of a multi-part MIME message that I receive in my inbox every once in a while, this is what's in there:

        This is a multi-part message in MIME format.

        ------=_NextPart_000_B10D_01CBAAA8.F29DB300
        Content-Type: text/plain
        Content-Transfer-Encoding: 7bit

        ...Text here...

        ------=_NextPart_000_B10D_01CBAAA8.F29DB300
        Content-Type: text/html; charset="iso-8859-1"
        Content-Transfer-Encoding: quoted-printable

        <html><body> html code here ... </body></html>
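
    As a language-neutral illustration of the message shape being asked about (the question itself concerns Freemarker/Java), here is the same multipart/alternative structure built with Python's standard email package:

        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        msg = MIMEMultipart("alternative")    # plain-text and HTML versions of one message
        msg["Subject"] = "Example"
        msg.attach(MIMEText("...Text here...", "plain"))
        msg.attach(MIMEText("<html><body>html code here ...</body></html>", "html"))
        print(msg.as_string())                # clients render the HTML part when they can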

    Read the article

  • Open Source Web Frameworks : Security

    - by trappedIntoCode
    How secure are popular open source web frameworks? I am particularly interested in popular frameworks like Rails and Django. If I am building a site which is going to do heavy e-commerce, is it OK to use frameworks like Django and Satchmo? Is security compromised because of their open architecture? I know being open source does not mean being downright open to hackers - Linux uses a superb authentication mechanism - but the web is a different game. What can be done in this regard? UPDATE: Thanks for the answers, guys. I understand that I will have to find a suitable hosting service for a secure e-commerce application and that additional layers of security will be needed. I understand that Django and Rails have been designed with security in mind, covering the most common forms of attack like XSS, injection, etc. (the Django book has a chapter on security). I was expecting comments from security gurus. If you are a security guru, would you recommend that an important site, which is likely going to be popular, be built on Django or Rails?

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and filled many times during the stored procedure. It is truncated because this is faster than delete. Would I get any performance increase by replacing this temporary table with a table variable, given that I'd suffer a penalty for using delete (truncate does not work on table variables)? While table variables live mainly in memory and are generally faster than temp tables, do I lose that benefit by having to delete rather than truncate?

    Read the article

  • MS Access 2003 - VBA for altering a table after a "SELECT * INTO tblTemp FROM tblMain" statement

    - by Justin
    Hi. I use functions like the following to make temporary tables out of crosstab queries:

        Function SQL_Tester()
            Dim sql As String

            If DCount("*", "MSysObjects", "[Name]='tblTemp'") Then
                DoCmd.DeleteObject acTable, "tblTemp"
            End If

            sql = "SELECT * INTO tblTemp FROM tblMain;"
            Debug.Print (sql)
            Set db = CurrentDb
            db.Execute (sql)
        End Function

    I do this so that I can then use more VBA to take the temporary table to Excel, use some of Excel's functionality (formulas and such), and then return the values to the original table (tblMain). The simple spot where I'm getting tripped up is that, after the SELECT * INTO statement, I need to add a brand new additional column to that temporary table, and I don't know how to do this. sql = "Create Table..." is about the only way I know, and of course that doesn't work too well with the above approach: I can't create a table that has already been created after the fact, and I can't create it beforehand, because the SELECT * INTO approach will return a "table already exists" message. Any help? Thanks guys!

    Read the article

  • Haystack / Whoosh Index Generation Error

    - by Keith Fitzgerald
    I'm trying to set up Haystack with the Whoosh backend. When I try to generate the index (or run any index command, for that matter) I receive:

        TypeError: Item in ``from list'' not a string

    If I completely remove my search_indexes.py I get the same error (so I'm guessing it can't find that file at all). What might cause this error? It's set to autodiscover, and I'm sure my app is installed because I'm currently using it. Full traceback:

        Traceback (most recent call last):
          File "./manage.py", line 17, in <module>
            execute_manager(settings)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 257, in fetch_command
            klass = load_command_class(app_name, subcommand)
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/core/management/__init__.py", line 67, in load_command_class
            module = import_module('%s.management.commands.%s' % (app_name, name))
          File "/Users/ghostrocket/Development/Redux/.dependencies/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 124, in <module>
            handle_registrations()
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 121, in handle_registrations
            search_sites_conf = __import__(settings.HAYSTACK_SITECONF)
          File "/Users/ghostrocket/Development/Redux/website/../website/search_sites.py", line 2, in <module>
            haystack.autodiscover()
          File "/Users/ghostrocket/Development/Redux/.dependencies/haystack/__init__.py", line 83, in autodiscover
            app_path = __import__(app, {}, {}, [app.split('.')[-1]]).__path__
        TypeError: Item in ``from list'' not a string

    And here is my search_indexes.py:

        from haystack import indexes
        from haystack import site
        from myproject.models import *

        site.register(myobject)

    Read the article

  • insert or update on table violates foreign key constraint

    - by sprasad12
    Hi, I have two tables, entitytype and project. Here are the CREATE TABLE statements:

        CREATE TABLE project (
            pname varchar(20) NOT NULL,
            PRIMARY KEY (pname)
        );

        CREATE TABLE entitytype (
            entityname varchar(20) NOT NULL,
            toppos char(100),
            leftpos char(100),
            pname varchar(20) NOT NULL,
            PRIMARY KEY (entityname),
            FOREIGN KEY (pname) REFERENCES project(pname)
                ON DELETE CASCADE ON UPDATE CASCADE
        );

    Now when I try to insert any values into the entitytype table I get the following error:

        ERROR: insert or update on table "entitytype" violates foreign key constraint "entitytype_pname_fkey"
        Detail: Key (pname)=(494) is not present in table "project".

    Can someone please shed some light on what I am doing wrong? Any input will be of great help. Thank you.

    Read the article

  • Serve static media on "nginx"

    - by MMRUSer
    My Django application is hosted on Apache, and now I want to serve its static media through nginx. I don't have any prior experience with nginx - plus, currently the static media is served through Apache - so I'm expecting a helping hand. The stack:

        Apache 2.2
        mod_wsgi
        nginx-0.7.65
        Django 1.1.1

    Thanks.

    Read the article

  • Automatically create a table that is composed of its primary key and 2 other foreign keys

    - by user210481
    I have a table A with a primary key id, a table B with a primary key id, and a table C with a primary key id plus columns Aid and Bid, where Aid and Bid are foreign keys in C referencing the primary keys of A and B respectively. One way of populating this DB would be to populate table A, table B and table C separately. I would like to know if there is a cleverer way of doing it, where as I populate A and B I can indicate that an entry in C should also be created. Any idea or suggestion on how to populate them is welcome - I'm trying to take advantage of the foreign key structure. Thanks
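
    Read in Django ORM terms (just one way to frame this schema - the names below are hypothetical), C is exactly the join table that a many-to-many relation maintains for you, so populating A and C together becomes a single add() call:

        from django.db import models

        class B(models.Model):
            name = models.CharField(max_length=50)

        class A(models.Model):
            name = models.CharField(max_length=50)
            # Django creates and maintains the join table C (a_id, b_id) itself.
            bs = models.ManyToManyField(B)

        # Usage: populate A and the corresponding row in C in one go.
        #   b = B.objects.create(name="b1")
        #   a = A.objects.create(name="a1")
        #   a.bs.add(b)   # inserts the (a.id, b.id) row into the join table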

    Read the article

  • Question on multi-probe Locality-Sensitive Hashing

    - by Yijinsei
    Hey guys, sorry to be asking this kind of noob question, but I really need some guidance on how to use multi-probe LSH pretty urgently, so I haven't done much research myself. I realize there is a library called LSHKIT available that implements that algorithm, but I'm having trouble figuring out how to use it. Right now, I have a few thousand 296-dimensional feature vectors, each representing an image. The vectors are used to answer queries with a user-input image, retrieving the most similar image. The method I use to derive the distance between vectors is Euclidean distance. This might be a rather noob question, but do you have any guidance on how I should implement multi-probe LSH? I'd be really grateful for any answer or response.
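
    For orientation, a minimal sketch of a single p-stable (Euclidean) LSH table - the building block that multi-probe LSH extends by also probing neighbouring buckets instead of building many tables. This is not LSHKIT's API, and all parameters are illustrative:

        import random
        from collections import defaultdict

        class EuclideanLSH:
            def __init__(self, dim, k=4, w=4.0, seed=0):
                rng = random.Random(seed)
                # k random projections; each hash is floor((a . v + b) / w)
                self.planes = [([rng.gauss(0, 1) for _ in range(dim)],
                                rng.uniform(0, w)) for _ in range(k)]
                self.w = w
                self.buckets = defaultdict(list)

            def _key(self, v):
                return tuple(int((sum(a_i * v_i for a_i, v_i in zip(a, v)) + b) // self.w)
                             for a, b in self.planes)

            def add(self, v, label):
                self.buckets[self._key(v)].append(label)

            def query(self, v):
                # Multi-probe LSH would also inspect buckets whose keys differ
                # by +/-1 in a few coordinates, boosting recall per table.
                return self.buckets.get(self._key(v), [])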

    Read the article

  • Is there a way to generate a PDF containing non-ASCII symbols with pisa from a Django template?

    - by mihailt
    Hi. I'm trying to generate a PDF from a template using this snippet:

        def write_pdf(template_src, context_dict):
            template = get_template(template_src)
            context = Context(context_dict)
            html = template.render(context)
            result = StringIO.StringIO()
            pdf = pisa.pisaDocument(StringIO.StringIO(html.encode("UTF-8")), result)
            if not pdf.err:
                return http.HttpResponse(result.getvalue(), mimetype='application/pdf')
            raise Exception('PDF error')

    But all non-Latin symbols are not showing correctly; the template and view are saved using UTF-8 encoding. I've tried saving the view as ANSI and then using unicode(html, "UTF-8"), but it throws a TypeError. I also thought that maybe the default fonts somehow don't support UTF-8, so following the pisa documentation I tried to set a font face in the style section of the template body. That still gave no results. Does anyone have some ideas how to solve this issue?

    Read the article

  • Django form linking 2 models by many to many field.

    - by Ed
    I have two models:

        class Event(models.Model):
            name = models.CharField(max_length=30, unique=True)
            long_description = models.TextField(blank=True, null=True)

        class Actor(models.Model):
            name = models.CharField(max_length=30, unique=True)
            event = models.ManyToManyField(Event, blank=True, null=True)

    I want to create a form that allows me to identify the link between the two models when I add a new entry. This works:

        class ActorForm(forms.ModelForm):
            class Meta:
                model = Actor

    The form includes both name and event, allowing me to create a new Actor and simultaneously link it to an existing Event. On the flip side:

        class EventForm(forms.ModelForm):
            class Meta:
                model = Event

    This form does not include an actor association, so I am only able to create a new Event; I can't simultaneously link it to an existing Actor. I tried to create an inline formset:

        EventFormSet = forms.models.inlineformset_factory(Event, Actor,
                                                          can_delete=False,
                                                          extra=2,
                                                          form=ActorForm)

    but I get an error:

        <class 'ctg.dtb.models.Actor'> has no ForeignKey to <class 'ctg.dtb.models.Event'>

    This isn't too surprising: the inline formset worked for another set of models I had, but this is a different example - here the models are related by a ManyToManyField, not a ForeignKey. I think I'm going about it entirely wrong. Overall question: How can I create a form that allows me to create a new Event and link it to an existing Actor?
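
    One possible direction, sketched against the models above rather than a confirmed fix: expose the reverse side of the many-to-many as an explicit field on the Event form and wire it up in save():

        from django import forms

        class EventForm(forms.ModelForm):
            actors = forms.ModelMultipleChoiceField(
                queryset=Actor.objects.all(), required=False)

            class Meta:
                model = Event

            def save(self, commit=True):
                event = super(EventForm, self).save(commit=commit)
                if commit:
                    for actor in self.cleaned_data["actors"]:
                        actor.event.add(event)   # link the new Event to existing Actors
                return event

        # With commit=False the caller is responsible for saving the event and
        # making the links afterwards.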

    Read the article

  • How to embed an HTML table into the body of an email

    - by Michael Mao
    Hi all: I am sending info to a target email via PHP's native mail() method right now. Everything else works fine, but the table part troubles me the most. See this sample output:

        Dear Michael Mao:
        Thank you for purchasing flight tickets with us, here is your receipt:

        Your tickets will be delivered by mail to the following address:
        Street Address 1 : sdfsdafsadf sdf
        Street Address 2 : N/A
        City : Sydney
        State : nsw
        Postcode : 2
        Country : Australia
        Credit Card Number : *************1234

        Your purchase details are recorded as :

        <table>
          <tr><th class="delete">del?</th><th class="from_city">from</th><th class="to_city">to</th>
              <th class="quantity">qty</th><th class="price">unit price</th><th class="price">total price</th></tr>
          <tr class="evenrow" id="Sydney-Lima"><td><input name="isDeleting" type="checkbox"></td>
              <td>Sydney</td><td>Lima</td><td>1</td><td>1030.00</td><td>1030</td></tr>
          <tr class="oddrow" id="Sydney-Perth"><td><input name="isDeleting" type="checkbox"></td>
              <td>Sydney</td><td>Perth</td><td>3</td><td>340.00</td><td>1020</td></tr>
          <tr class="totalprice"><td colspan="5">Grand Total Price</td><td id="grandtotal">2050</td></tr>
        </table>

    The source of the table is taken directly from a web page, exactly the same. However, Gmail, Hotmail and most other email clients will not render this as a table. So I am wondering, without using Outlook or other email-sending agent software, how could I craft an embedded table for the PHP mail() method to send? The current code snippet corresponding to the table generation:

        $purchaseinfo = $_POST["purchaseinfo"];
        // if html tags are not to be filtered in the body of email
        $stringBuilder .= "<table>" . stripslashes($purchaseinfo) . "</table>";

        // must send json response back to caller ajax request
        if (mail($email, 'Your purchase information on www.hardlyworldtravel.com', $emailbody, $headers))
            echo json_encode(array("feedback" => "successful"));
        else
            echo json_encode(array("feedback" => "error"));

    Any hints and suggestions are welcome; thanks a lot in advance.

    Read the article
