Search Results

Search found 1367 results on 55 pages for 'matthew pk'.

Page 2 of 55

  • django blog - post-reply system: display replies

    - by dana
    I have a mini blog app with a reply system. I want to list all mini blog entries and their replies, if there are any. In views.py I have:

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            paginator = New.objects.filter(created_by=request.user)
            replies = Reply.objects.filter(reply_to=paginator)
            return render_to_response('profile/publicProfile.html',
                                      {'object_list': u, 'list': paginator, 'replies': replies},
                                      context_instance=RequestContext(request))

    and in the template:

        <h3>Recent Entries:</h3>
        {% for object in list %}
          <li>{{ object.post }} <br />
          {% for object in replies %}
            {{ object.reply }} <br />
          {% endfor %}

    Note: reply_to is a ForeignKey to New, and New is the model for the 'mini blog' table. The problem is that this shows all the replies under every blog entry, instead of each entry's own replies, if there are any. Thanks.
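
    A minimal sketch of one common fix (an addition, not part of the original question): rather than passing a single flat replies queryset, let the template follow the ForeignKey backwards from each post. Assuming Reply.reply_to is a ForeignKey to New with no related_name set, the reverse accessor on each post is the default reply_set:

        # views.py -- simplified view; the replies queryset is no longer needed
        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            posts = New.objects.filter(created_by=request.user)
            return render_to_response(
                'profile/publicProfile.html',
                {'object_list': u, 'list': posts},
                context_instance=RequestContext(request))

        <!-- publicProfile.html -->
        <h3>Recent Entries:</h3>
        {% for object in list %}
          <li>{{ object.post }}</li>
          {% for reply in object.reply_set.all %}
            {{ reply.reply }}<br />
          {% endfor %}
        {% endfor %}

    Each inner loop then only touches the replies whose reply_to points at that particular post, which is what the question is after.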

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Access - how to derive a field upon entry of a record

    - by jonos
    I have an Access database for a media rental company which includes the following tables, among others; LOAN: customer_id (pk), loan_datetimeLeant (pk), loan_dateReturned (pk) LOAN_ITEMS: customer_id (pk), loan_datetimeLeant (pk), item_id (pk), loanItem_cost ITEM: item_id (pk), product_id, item_availability PRODUCT: product_id (pk), product_name, product_type MEDIA_COST: product_type (pk), product_cost So basically the 'product type' (DVD, VHS etc) determines the cost. The 'product id' determines what movie, platform etc each item is. My question is: When creating a form for the Loan_Items table, how can I populate the loanItem_cost field (so it is stored) whenever a new Loan_Item record is added? I need to store it as the cost of the item (product_cost) may change over time and I'd like to record what the customer paid at the time the loan was made. Thanks in advance

    Read the article

  • Is it good practice to keep 2 related tables (using auto_increment PK) to have the same Max of auto_increment ID when table1 got modified?

    - by Tum
    This question is about good design practice in programming. Consider this example with two interrelated tables:

        Table1: textID - text
            1 - love..
            2 - men...
            ...

        Table2: rID - textID
            1 - 1
            2 - 2
            ...

    Note: in Table1, textID is an auto_increment primary key; in Table2, rID is an auto_increment primary key and textID is a foreign key. The relationship is that one rID has exactly one textID, but one textID can have several rIDs. So when Table1 is modified, Table2 should be updated accordingly. Here is a fictitious example: you build a very complicated system, and when you modify a record in Table1 you need to keep track of the related record in Table2. To do that you can take one of two options. Option 1: when you modify a record in Table1, you also modify the related record in Table2. This can be quite hard to program, especially in a very complicated system. Option 2: instead of modifying the related record in Table2, you delete the old record in Table2 and insert a new one. This is easier to program. Suppose you use option 2: after modifying records 1, 2, 3, ..., 100 in Table1, Table2 will look like this:

        Table2: rID - textID
            101 - 1
            102 - 2
            ...
            200 - 100

    The max auto_increment ID in Table1 is still 100, but the max in Table2 has already reached 200. What if the user modifies records many times? Could Table2 run out of IDs? We could use BIGINT, but would that slow the application down? Note: programming the in-place updates of Table2 whenever Table1 changes is harder and therefore more error prone, whereas clearing the old records and inserting new ones is much simpler and less error prone. So, is it good practice for two related tables (using auto_increment PKs) to keep the same max auto_increment ID when table1 is modified?
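
    To put the "running out of IDs" worry in perspective, a back-of-the-envelope calculation (added here for illustration, not from the original post): an unsigned BIGINT auto_increment column tops out at 2^64 - 1, and the wider key costs only 4 extra bytes per row over INT, which is rarely a measurable slowdown.

        18,446,744,073,709,551,615  (max unsigned BIGINT)
        ÷ (1,000,000 deletes+inserts per second × 86,400 sec/day × 365 days/year)
        ≈ 585,000 years to exhaust the counter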

    Read the article

  • Adding a new Entity gives error: EntityCommandCompilationException was unhandled by user code

    - by programmerist
    i have 5 tables in started projects. if i adds new table (Urun enttiy) writing below codes: project.BAL : public static List<Urun> GetUrun() { using (GenoTipSatisEntities genSatisUrunCtx = new GenoTipSatisEntities()) { ObjectQuery<Urun> urun = genSatisUrunCtx.Urun; return urun.ToList(); } } if i receive data form BAL in UI.aspx: using project.BAL; namespace GenoTip.Web.ContentPages.Satis { public partial class SatisUrun : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { FillUrun(); } } void FillUrun() { ddlUrun.DataSource = SatisServices.GetUrun(); ddlUrun.DataValueField = "ID"; ddlUrun.DataTextField = "Ad"; ddlUrun.DataBind(); } } } i added URun later. error appears ToList method: EntityCommandCompilationException was unhandled bu user code error Detail: Error 1 Error 3007: Problem in Mapping Fragments starting at lines 659, 873: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL Error 2 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND PK is in 'FaturaDetay' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 874 11 GenoTip.DAL Error 3 Error 3012: Problem in Mapping Fragments starting at lines 659, 873: Data loss is possible in FaturaDetay.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'FaturaDetay' EntitySet AND PK does NOT play Role 'FaturaDetay' in AssociationSet 'FK_FaturaDetay_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 660 15 GenoTip.DAL Error 4 Error 3007: Problem in Mapping Fragments starting at lines 748, 879: Non-Primary-Key column(s) [UrunID] are being mapped in both fragments to different conceptual side properties - data inconsistency is possible because the corresponding conceptual side properties can be independently modified. C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL Error 5 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND PK is in 'Satis' EntitySet) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 880 11 GenoTip.DAL Error 6 Error 3012: Problem in Mapping Fragments starting at lines 748, 879: Data loss is possible in Satis.UrunID. An Entity with Key (PK) will not round-trip when: (PK is in 'Satis' EntitySet AND PK does NOT play Role 'Satis' in AssociationSet 'FK_Satis_Urun' AND Entity.UrunID is not NULL) C:\Users\pc\Desktop\GenoTip.Satis\GenoTip.DAL\ModelSatis.edmx 749 15 GenoTip.DAL

    Read the article

  • Complete Guide to Symbolic Links (symlinks) on Windows or Linux

    - by Matthew Guay
    Want to easily access folders and files from different folders without maintaining duplicate copies?  Here’s how you can use Symbolic Links to link anything in Windows 7, Vista, XP, and Ubuntu. So What Are Symbolic Links Anyway? Symbolic links, otherwise known as symlinks, are basically advanced shortcuts. You can create symbolic links to individual files or folders, and then these will appear like they are stored in the folder with the symbolic link even though the symbolic link only points to their real location. There are two types of symbolic links: hard and soft. Soft symbolic links work essentially the same as a standard shortcut.  When you open a soft link, you will be redirected to the folder where the files are stored.  However, a hard link makes it appear as though the file or folder actually exists at the location of the symbolic link, and your applications won’t know any different. Thus, hard links are of the most interest in this article. Why should I use Symbolic Links? There are many things we use symbolic links for, so here’s some of the top uses we can think of: Sync any folder with Dropbox – say, sync your Pidgin Profile Across Computers Move the settings folder for any program from its original location Store your Music/Pictures/Videos on a second hard drive, but make them show up in your standard Music/Pictures/Videos folders so they’ll be detected my your media programs (Windows 7 Libraries can also be good for this) Keep important files accessible from multiple locations And more! If you want to move files to a different drive or folder and then symbolically link them, follow these steps: Close any programs that may be accessing that file or folder Move the file or folder to the new desired location Follow the correct instructions below for your operating system to create the symbolic link. Caution: Make sure to never create a symbolic link inside of a symbolic link. For instance, don’t create a symbolic link to a file that’s contained in a symbolic linked folder. This can create a loop, which can cause millions of problems you don’t want to deal with. Seriously. Create Symlinks in Any Edition of Windows in Explorer Creating symlinks is usually difficult, but thanks to the free Link Shell Extension, you can create symbolic links in all modern version of Windows pain-free.  You need to download both Visual Studio 2005 redistributable, which contains the necessary prerequisites, and Link Shell Extension itself (links below).  Download the correct version (32 bit or 64 bit) for your computer. Run and install the Visual Studio 2005 Redistributable installer first. Then install the Link Shell Extension on your computer. Your taskbar will temporally disappear during the install, but will quickly come back. Now you’re ready to start creating symbolic links.  Browse to the folder or file you want to create a symbolic link from.  Right-click the folder or file and select Pick Link Source. To create your symlink, right-click in the folder you wish to save the symbolic link, select “Drop as…”, and then choose the type of link you want.  You can choose from several different options here; we chose the Hardlink Clone.  This will create a hard link to the file or folder we selected.  The Symbolic link option creates a soft link, while the smart copy will fully copy a folder containing symbolic links without breaking them.  These options can be useful as well.   Here’s our hard-linked folder on our desktop.  
Notice that the folder looks like its contents are stored in Desktop\Downloads, when they are actually stored in C:\Users\Matthew\Desktop\Downloads.  Also, when links are created with the Link Shell Extension, they have a red arrow on them so you can still differentiate them. And, this works the same way in XP as well. Symlinks via Command Prompt Or, for geeks who prefer working via command line, here’s how you can create symlinks in Command Prompt in Windows 7/Vista and XP. In Windows 7/Vista In Windows Vista and 7, we’ll use the mklink command to create symbolic links.  To use it, we have to open an administrator Command Prompt.  Enter “command” in your start menu search, right-click on Command Prompt, and select “Run as administrator”. To create a symbolic link, we need to enter the following in command prompt: mklink /prefix link_path file/folder_path First, choose the correct prefix.  Mklink can create several types of links, including the following: /D – creates a soft symbolic link, which is similar to a standard folder or file shortcut in Windows.  This is the default option, and mklink will use it if you do not enter a prefix. /H – creates a hard link to a file /J – creates a hard link to a directory or folder So, once you’ve chosen the correct prefix, you need to enter the path you want for the symbolic link, and the path to the original file or folder.  For example, if I wanted a folder in my Dropbox folder to appear like it was also stored in my desktop, I would enter the following: mklink /J C:\Users\Matthew\Desktop\Dropbox C:\Users\Matthew\Documents\Dropbox Note that the first path was to the symbolic folder I wanted to create, while the second path was to the real folder. Here, in this command prompt screenshot, you can see that I created a symbolic link of my Music folder to my desktop.   And here’s how it looks in Explorer.  Note that all of my music is “really” stored in C:\Users\Matthew\Music, but here it looks like it is stored in C:\Users\Matthew\Desktop\Music. If your path has any spaces in it, you need to place quotes around it.  Note also that the link can have a different name than the file it links to.  For example, here I’m going to create a symbolic link to a document on my desktop: mklink /H “C:\Users\Matthew\Desktop\ebook.pdf”  “C:\Users\Matthew\Downloads\Before You Call Tech Support.pdf” Don’t forget the syntax: mklink /prefix link_path Target_file/folder_path In Windows XP Windows XP doesn’t include built-in command prompt support for symbolic links, but we can use the free Junction tool instead.  Download Junction (link below), and unzip the folder.  Now open Command Prompt (click Start, select All Programs, then Accessories, and select Command Prompt), and enter cd followed by the path of the folder where you saved Junction. Junction only creates hard symbolic links, since you can use shortcuts for soft ones.  To create a hard symlink, we need to enter the following in command prompt: junction –s link_path file/folder_path As with mklink in Windows 7 or Vista, if your file/folder path has spaces in it make sure to put quotes around your paths.  Also, as usual, your symlink can have a different name that the file/folder it points to. Here, we’re going to create a symbolic link to our My Music folder on the desktop.  We entered: junction -s “C:\Documents and Settings\Administrator\Desktop\Music” “C:\Documents and Settings\Administrator\My Documents\My Music” And here’s the contents of our symlink.  
    Note that the path looks like these files are stored in a Music folder directly on the Desktop, when they are actually stored in My Documents\My Music.  Once again, this works with both folders and individual files. Please Note: Junction would work the same in Windows 7 or Vista, but since they include a built-in symbolic link tool we found it better to use it on those versions of Windows. Symlinks in Ubuntu Unix-based operating systems have supported symbolic links since their inception, so it is straightforward to create symbolic links in Linux distros such as Ubuntu.  There's no graphical way to create them like the Link Shell Extension for Windows, so we'll just do it in Terminal. Open terminal (open the Applications menu, select Accessories, and then click Terminal), and enter the following: ln -s file/folder_path link_path Note that this is the opposite of the Windows commands; you put the source for the link first, and then the path second. For example, let's create a symbolic link of our Pictures folder on our Desktop.  To do this, we entered: ln -s /home/maguay/Pictures /home/maguay/Desktop Once again, here are the contents of our symlink folder.  The pictures look as if they're stored directly in a Pictures folder on the Desktop, but they are actually stored in /home/maguay/Pictures. Delete Symlinks Removing symbolic links is very simple – just delete the link!  Most of the command line utilities offer a way to delete a symbolic link via command prompt, but you don't need to go to the trouble. Conclusion Symbolic links can be very handy, and we use them constantly to help us stay organized and keep our hard drives from overflowing.  Let us know how you use symbolic links on your computers! Download Link Shell Extension for Windows 7, Vista, and XP Download Junction for XP
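
    Beyond the GUI and shell tools above, if you'd rather script link creation, here is a short cross-platform sketch using Python's standard library (an addition for illustration, not from the original guide; the file path in the second call is hypothetical, and on Windows os.symlink usually requires administrator rights or Developer Mode):

        import os

        # Soft (symbolic) link to a folder -- the same effect as the "mklink /D" example above.
        os.symlink(r"C:\Users\Matthew\Documents\Dropbox",
                   r"C:\Users\Matthew\Desktop\Dropbox",
                   target_is_directory=True)   # this flag only matters on Windows

        # Hard link to a single file -- the "mklink /H" case; hard links work for files only.
        os.link("/home/maguay/Pictures/photo.jpg",   # hypothetical source file
                "/home/maguay/Desktop/photo.jpg")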

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night.  These are mainly lookup tables.  I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements.  And being lazy I didn’t want to write them myself.  The stored procedure below will generate a MERGE statement for the table you pass it.  If you have identity values it populates those properly.  You need to have primary keys on the table for the joins to be generated properly.  The only thing hard coded is the source database.  You’ll need to update that for your environment.  We actually used a linked server in our situation. CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128) )ASset nocount on; declare @return int;PRINT '-- ' + @table + ' -------------------------------------------------------------'--PRINT 'SET NOCOUNT ON;--'-- Set the identity insert on for tables with identitiesselect @return = objectproperty(object_id(@table), 'TableHasIdentity')if @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; 'declare @sql varchar(max) = ''declare @list varchar(max) = '';SELECT @list = @list + [name] +', 'from sys.columnswhere object_id = object_id(@table)SELECT @list = @list + [name] +', 'from sys.columnswhere object_id = object_id(@table)SELECT @list = @list + 's.' + [name] +', 'from sys.columnswhere object_id = object_id(@table)-- --------------------------------------------------------------------------------PRINT 'MERGE [dbo].[' + @table + '] AS t'PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'-- Get the join columns ----------------------------------------------------------SET @list = ''select @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND 'from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk , INFORMATION_SCHEMA.KEY_COLUMN_USAGE cwhere pk.TABLE_NAME = @tableand CONSTRAINT_TYPE = 'PRIMARY KEY'and c.TABLE_NAME = pk.TABLE_NAMEand c.CONSTRAINT_NAME = pk.CONSTRAINT_NAMESELECT @list = LEFT(@list, LEN(@list) -3)PRINT 'ON ( ' + @list + ')'-- WHEN MATCHED ------------------------------------------------------------------PRINT 'WHEN MATCHED THEN UPDATE SET'SELECT @list = '';SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] +'],'from sys.columnswhere object_id = object_id(@table)-- don't update primary keysand [name] NOT IN (SELECT [column_name] from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk , INFORMATION_SCHEMA.KEY_COLUMN_USAGE c where pk.TABLE_NAME = @table and CONSTRAINT_TYPE = 'PRIMARY KEY' and c.TABLE_NAME = pk.TABLE_NAME and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)-- and don't update identity columnsand columnproperty(object_id(@table), [name], 'IsIdentity ') = 0 --print @list PRINT left(@list, len(@list) -3 )-- WHEN NOT MATCHED BY TARGET ------------------------------------------------PRINT ' WHEN NOT MATCHED BY TARGET THEN';-- Get the insert listSET @list = ''SELECT @list = @list + '[' + [name] +'], 'from sys.columnswhere object_id = object_id(@table)SELECT @list = LEFT(@list, LEN(@list) - 1)PRINT ' INSERT(' + @list + ')'-- get the values listSET @list = ''SELECT @list = @list + 's.[' +[name] +'], 'from sys.columnswhere object_id = object_id(@table)SELECT @list = LEFT(@list, LEN(@list) - 1)PRINT ' VALUES(' + @list + ')'-- WHEN NOT MATCHED BY SOURCEprint 'WHEN NOT MATCHED BY SOURCE THEN DELETE; 'PRINT ''PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';PRINT ''-- Set the identity insert OFF for tables with identitiesselect @return = 
objectproperty(object_id(@table), 'TableHasIdentity')if @return = 1 PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; 'PRINT ''PRINT 'GO'PRINT '';
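
    For reference, a hypothetical invocation (the table name is made up for illustration): run the procedure for one lookup table and copy the PRINTed output from the Messages tab into your deployment script.

        -- Generate the MERGE script for a single lookup table
        EXEC dba_GenerateMergeStatement @table = N'StatusCode';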

    Read the article

  • t-sql - find differences for pk's in a single table (Self join?)

    - by Pace
    Hi Experts, my situation is this. I have a table of products with a pk "Parent" which has "Components". The data looks something like this:

        Parent (PK)   Component
        Car1          Wheel
        Car1          Tyre
        Car1          Roof
        Car2          Alloy
        Car2          Tyre
        Car2          Roof
        Car3          Alloy
        Car3          Tyre
        Car3          Roof
        Car3          Leather Seats

    Now what I want to do is some query that I can feed two codes in and see the differences. I.e. if I feed in "Car1", "Car2" it would return something like:

        Parent   Component
        Car1     Wheel
        Car2     Alloy

    as this is the difference between the two. If I said "Car1", "Car3" I would expect:

        Parent   Component
        Car1     Wheel
        Car3     Alloy
        Car3     Leather Seats

    Your help with this matter would be greatly appreciated.
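
    One way to express that comparison, as a sketch (the table name Products is an assumption; the poster didn't name the table): keep each row whose component has no match under the other selected parent.

        SELECT p.Parent, p.Component
        FROM   Products AS p                  -- "Products" is an assumed table name
        WHERE  p.Parent IN ('Car1', 'Car2')
        AND    NOT EXISTS
               (SELECT 1
                FROM   Products AS o
                WHERE  o.Parent IN ('Car1', 'Car2')
                AND    o.Parent    <> p.Parent
                AND    o.Component =  p.Component);

    With 'Car1' and 'Car2' this returns Car1/Wheel and Car2/Alloy, matching the expected output above; swap in 'Car3' for the second value to get the Car1/Car3 result.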

    Read the article

  • If I drop my clustered PK and add a new one, what order will my rows be in?

    - by stack
    In SQL Server, I'm looking at TableA, which currently has a uniqueidentifier clustered primary key. The GUID has no meaning in any context. (I'll give you a second to clean up your keyboard and monitor and set down the soda.) I'd like to drop that primary key and add a new unique integer primary key to the table. My question is this: when I drop the index, modify the column from uniqueidentifier to int, and add the new clustered unique primary key to the modified column, will the new PK values be in the order of insertion into the table, or will they be in some other order? Is this the right way to go here? Will this work? (I'm kind of a noobkin with regard to table creation/modification.)

    Read the article

  • Getting field of type bytea in helper table when using GenerationType.IDENTITY

    - by dtrunk
    I'm creating my db scheme using Hibernate. There's a Table called "tbl_articles" and another one called "tbl_categories". To have a n-n relationship a helper table ("tbl_articles_categories") is needed. Here are all necessary Entities: @Entity @Table( name = "tbl_articles" ) public class Article implements Serializable { private static final long serialVersionUID = 1L; @Id @Column( nullable = false ) @GeneratedValue( strategy = GenerationType.IDENTITY ) private Integer id; // other fields... public Integer getId() { return id; } public void setId( Integer id ) { this.id = id; } // other fields... } @Entity @Table( name = "tbl_categories" ) public class Category implements Serializable { private static final long serialVersionUID = 1L; @Id @Column( nullable = false ) @GeneratedValue( strategy = GenerationType.IDENTITY ) private Integer id; // other fields public Integer getId() { return id; } public void setId( Integer id ) { this.id = id; } // other fields... } @Entity @Table( name = "tbl_articles_categories" ) @AssociationOverrides({ @AssociationOverride( name = "pk.article", joinColumns = @JoinColumn( name = "article_id" ) ), @AssociationOverride( name = "pk.category", joinColumns = @JoinColumn( name = "category_id" ) ) }) public class ArticleCategory { private ArticleCategoryPK pk = new ArticleCategoryPK(); public void setPk( ArticleCategoryPK pk ) { this.pk = pk; } @EmbeddedId public ArticleCategoryPK getPk() { return pk; } @Transient public Article getArticle() { return pk.getArticle(); } public void setArticle( Article article ) { pk.setArticle( article ); } @Transient public Category getCategory() { return pk.getCategory(); } public void setCategory( Category category ) { pk.setCategory( category ); } } @Embeddable public class ArticleCategoryPK implements Serializable { private static final long serialVersionUID = 1L; @ManyToOne @ForeignKey( name = "tbl_articles_categories_fkey_article_id" ) private Article article; @ManyToOne @ForeignKey( name = "tbl_articles_categories_fkey_category_id" ) private Category category; public ArticleCategoryPK( Article article, Category category ) { setArticle( article ); setCategory( category ); } public ArticleCategoryPK() { } public Article getArticle() { return article; } public void setArticle( Article article ) { this.article = article; } public Category getCategory() { return category; } public void setCategory( Category category ) { this.category = category; } } Now, I'm getting a serial type what I wanted in my articles table as well as in my categories table. But looking into my helper table, there aren't the expected fields article_id and category_id each of type integer - instead there are article and category of type bytea. What's wrong here? EDIT: Sorry, forgot to mention that I'm using PostgreSQL.

    Read the article

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    Ok this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server backend. I am in the process of writing a new front end for it, but... we need to continue using the access front end for a while even after we deploy my new front end for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is the database design is a nightmare. For example some simple parent-child table relationships have like 4 and 5 part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or whatever, and add a new column to each of these tables called ID that is just an identity. If I change the PK and FKs on these tables to more managable fields, will the Access app have problems? What I mean is, does access use the meta data from the tables (PK and FK info) in such a way that it would break the app to change these?

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular now, but instead find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets waaay more readership than the other two blogs, so in raw numbers every post on the tech blog will always out number views on the other two. So lets say the average tech blog post gets 500 facebook likes and the other two get an average of 50 likes per post. Then when there is a sports blog post that has 200 fb likes and a gossip blog post with 300 while the tech blog posts today have 500 likes I want to highlight the sports and gossip blog posts (more likes than average vs tech blog with more # of likes but just average for the blog) The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (facebook, twitter, google+, linkeIn). So over time there will be a history of likes for each blog post, i.e post 1234 after 15 min: 10 fb likes, 4 tweets, 6 g+ after 30 min: 15 fb likes, 15 tweets, 10 g+ ... ... after 48 hours: 200 fb likes, 25 tweets, 15 g+ By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any give time interval. So for example the average number of fb likes for all blog posts 48hrs after posting is 50, and a particular post has 200 I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time-frame, i.e. fb likes after 30min or tweets after 24 hrs in-order to compute averages with which to compare against (or should averages be stored in it's own table?) If this approach is flawed or could use improvement please let me know, but it is not my main question. My main question is what should a database scheme for storing this info look like? Assuming that the above approach is taken I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases, in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema. Schema 1 DB Popular Posts Table: Post post_id ( primary key(pk) ) url title Table: Social Activity activity_id (pk) url (fk) type (i.e. facebook,twitter,g+) value timestamp This was my initial instinct (base on my very limited db knowledge). As far as I under stand this schema would be 3NF? I searched for designs of similar database model, and found this question on stackoverflow, http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users overtime). Taking the accepted answer for that question and applying it to my model results in something like: Schema 2 (same as above, but break down the social activity into 2 tables) DB Popular Posts Table: Post post_id (pk) url title Table: Social Measurement measurement_id (pk) post_id (fk) timestamp Table: Social stat stat_id (pk) measurement_id (fk) type (i.e. facebook,twitter,g+) value The advantage I see in schema 2 is that I will likely want to access all the values for a given time, i.e. 
when making a measurement at 30min after a post is published I will simultaneous check number of fb likes, fb shares, fb comments, tweets, g+, linkedIn. So with this schema it may be easier get get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had is since it doesn't make sense to compare number of fb likes with number of tweets or g+ shares, maybe it makes sense to separate each social measurement into it's own table? Schema 3 DB Popular Posts Table: Post post_id (pk) url title Table: fb_likes fb_like_id (pk) post_id (fk) timestamp value Table: fb_shares fb_shares_id (pk) post_id (fk) timestamp value Table: tweets tweets__id (pk) post_id (fk) timestamp value Table: google_plus google_plus_id (pk) post_id (fk) timestamp value As you can see I am generally lost/unsure of what approach to take. I'm sure this typical type of database problem (storing measurements overtime, i.e temperature statistic) that must have a common solution. Is there a design pattern/model for this, does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
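
    For concreteness, here is schema 2 written out as DDL (a sketch only; table names, column types, and the extra index are illustrative, not from the original question):

        CREATE TABLE post (
            post_id   INT PRIMARY KEY,
            url       VARCHAR(2048),
            title     VARCHAR(512)
        );

        CREATE TABLE social_measurement (
            measurement_id INT PRIMARY KEY,
            post_id        INT NOT NULL,
            measured_at    TIMESTAMP NOT NULL,
            FOREIGN KEY (post_id) REFERENCES post (post_id)
        );

        CREATE TABLE social_stat (
            stat_id        INT PRIMARY KEY,
            measurement_id INT NOT NULL,
            network        VARCHAR(20) NOT NULL,   -- 'fb_like', 'fb_share', 'tweet', 'g+', ...
            value          INT NOT NULL,
            FOREIGN KEY (measurement_id) REFERENCES social_measurement (measurement_id)
        );

    An index on social_measurement (post_id, measured_at) then makes "all stats for post 1234 at time x" the single-join query the question describes.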

    Read the article

  • jQuery $.data(): Possible misuse?

    - by Rosarch
    Perhaps I'm using $.data incorrectly. Assigning the data:

        var course_li = sprintf('<li class="draggable course">%s</li>', course["fields"]["name"]);
        $(course_li).data('pk', course['pk']);
        alert(course['pk']); // shows a correct value

    Moving the li to a different ul:

        function moveToTerm(item, term) {
            item.fadeOut(function() {
                item.appendTo(term).fadeIn();
            });
        }

    Trying to access the data later:

        $.each($(term).children(".course"), function(index, course) {
            var pk = $(course).data('pk'); // pk is undefined
            courses.push(pk);
        });

    What am I doing wrong? I have confirmed that the course li on which I am setting the data is the same as the one on which I am looking for it. (Unless I'm messing that up by calling appendTo() on it?)
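
    A likely culprit, offered as a sketch rather than a confirmed diagnosis: $(course_li) wraps the HTML string into a brand-new element every time it is called, so the element that receives .data('pk', ...) is not necessarily the element that later ends up in the term's ul. Keeping a single reference to the created element avoids that (sprintf is the same helper used in the question):

        // Build the element once, attach the data to it, and append that same element.
        var $course = $(sprintf('<li class="draggable course">%s</li>',
                                course["fields"]["name"]));
        $course.data('pk', course['pk']);
        $course.appendTo(term);               // or pass $course to moveToTerm(...)

        // Later, .data('pk') is read from the very element that carries it:
        $(term).children(".course").each(function () {
            courses.push($(this).data('pk'));
        });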

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as their entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all this ORM crap we deal with with 'newer' improved technology today - but I disgress) but use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations like queries DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs dynamic query and the code ends up looping over the result set using the ugly DataRow Array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objecs is two fold: The array syntax is tedious to type and not real clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks much a bit more natural and describes what's happening a little nicer as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenearios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform  custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Bascially you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what simple class looks like:/// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNUll to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation tries to retrieve value from instance /// first then into this object /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted in DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa which is something that makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP -  but the fact that we can create a dynamic type that uses a DataRow as it's 'DataSource' to serve member values. It's pretty useful feature if you think about it, especially given how little code it takes to implement. 
By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: Direct Property Syntax Automatic Type Casting so no explicit casts are required Caveats As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invokations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocations - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference of running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: Make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far performance is pretty good for repeated operations (ie. in loops). While usually a little slower the perf hit is a lot less typically than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); Dynamic - Lots of uses The arrival of Dynamic types in .NET has been met with mixed emotions. Die hard .NET developers decry dynamic types as an abomination to the language. After all what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'Method missing' behavior on objects with InvokeMember which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: It's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move…© Rick Strahl, West Wind Technologies, 2005-2011Posted in CSharp  .NET   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • JavaScript: how to automatically change a word on click without refreshing the browser?

    - by Fakhrul Zakry
    im quite lost here and not really expert about javascript. I want to change the content when user click with "Thanks for vote" automatically without need to refresh the page. Here is my html: {% if poll.privacy == "own" and request.user.get_profile.parliment != poll.location %} You do not have permission to vote this. {% else %} {% if has_vote %} {% if poll.rating_option == '1to5' %} <div class="rate"> <div id="poll-rate-{{ poll.pk }}"></div> </div> {% else %} Thanks for your vote. {% endif %} {% else %} {% if poll.rating_option == 'yes_no' %} <a href="javascript:void(0)" class="rate btn btn-xs btn-success mr5 vote-positive" rel="{% url 'vote_vote' poll.pk 1 %}" alt="{{ poll.pk }}">Yes</a> <a href="javascript:void(0)" class="rate btn btn-xs btn-danger vote-negative" rel="{% url 'vote_vote' poll.pk 0 %}" alt="{{ poll.pk }}">No</a> {% elif poll.rating_option == 'like_dislike' %} <a href="javascript:void(0)" class="rate btn btn-xs btn-success mr5 vote-positive" rel="{% url 'vote_vote' poll.pk 1 %}" alt="{{ poll.pk }}">Like</a> <a href="javascript:void(0)" class="rate btn btn-xs btn-danger vote-negative" rel="{% url 'vote_vote' poll.pk 0 %}" alt="{{ poll.pk }}">Dislike</a> {% elif poll.rating_option == '1to5' %} <div class="rate"> <div id="poll-rate-{{ poll.pk }}"></div> </div> {% endif %} {% endif %} {% endif %} and here is my javascript: function bindVoteHandler() { $('a.vote-positive, a.vote-negative').click(function(event) { event.preventDefault(); var link = $(this).attr('rel'); var poll_pk = $(this).attr('alt'); var selected_div = $(this).parent('div'); selected_div.html('<img src="{{ STATIC_URL }}img/loading_small.gif" />'); $.ajax(link).done(function( data ) { var result_div = $('div#vote-result-'+poll_pk); result_div.html(data); result_div.removeClass('vote-result-grey-out'); selected_div.html('<small>Thanks for your vote.</small>'); }); }); }; did anyone know what is the problem why i need to refresh my page after Like/Vote/rate to make it become (Thanks For your vote) ? please someone know help or share link with me. Below is the image: before click Like: after click Like: then when refreshed the word just displayed, it supposed automatically display when click Like. Thank you in advance..
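
    Two things worth checking, written as a sketch rather than a confirmed fix: $(this).parent('div') only matches when the link's direct parent is a div, so if the Yes/No anchors aren't wrapped in one, selected_div is empty and the "Thanks for your vote" text is never written; $(this).closest('div') is more forgiving. And if the anchors are rendered after bindVoteHandler() runs, a delegated handler keeps the click binding alive:

        function bindVoteHandler() {
            // Delegated handler: also works for vote links added to the page later.
            $(document).on('click', 'a.vote-positive, a.vote-negative', function (event) {
                event.preventDefault();
                var link = $(this).attr('rel');
                var poll_pk = $(this).attr('alt');
                var selected_div = $(this).closest('div');   // instead of .parent('div')

                selected_div.html('<img src="{{ STATIC_URL }}img/loading_small.gif" />');
                $.ajax(link).done(function (data) {
                    var result_div = $('div#vote-result-' + poll_pk);
                    result_div.html(data);
                    result_div.removeClass('vote-result-grey-out');
                    selected_div.html('<small>Thanks for your vote.</small>');
                });
            });
        }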

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of urls of type http://www.abc.com/pk/ca and http://www.abc.com/pk. Now I want to find only those urls that end with /pk or /pk/ and don't have anything between .com and /pk.
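
    A sketch in Java (added for illustration): anchor the pattern so that nothing may follow .com except /pk, with an optional trailing slash.

        import java.util.regex.Pattern;

        public class PkUrlFilter {
            public static void main(String[] args) {
                // Matches .../pk and .../pk/ but not .../pk/ca
                Pattern endsWithPk = Pattern.compile("^https?://www\\.abc\\.com/pk/?$");
                System.out.println(endsWithPk.matcher("http://www.abc.com/pk").matches());    // true
                System.out.println(endsWithPk.matcher("http://www.abc.com/pk/ca").matches()); // false
            }
        }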

    Read the article

  • Java - Regex problem

    - by Yatendra Goel
    I have a list of urls of these types: http://www.abc.com/pk/etc, http://www.abc.com/pk/etc/ and http://www.abc.com/pk/etc/etc, where etc can be anything. I want to match only those urls that contain www.abc.com/pk/etc or www.abc.com/pk/etc/ (exactly one path segment after /pk, nothing more).
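
    A sketch of the corresponding Java pattern (added for illustration; url stands for whatever string comes from the list, and if text may precede the host you would drop the ^ anchor or use Matcher.find()):

        // Matches http://www.abc.com/pk/etc and http://www.abc.com/pk/etc/
        // but not  http://www.abc.com/pk/etc/etc (exactly one segment after /pk)
        Pattern oneSegmentAfterPk = Pattern.compile("^https?://www\\.abc\\.com/pk/[^/]+/?$");
        boolean ok = oneSegmentAfterPk.matcher(url).matches();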

    Read the article

  • Do these tables respect 3NF database normalization?

    - by penas
    AUTHOR table: Author_ID (PK), First_Name, Last_Name
    TITLES table: TITLE_ID (PK), NAME, Author_ID (FK)
    DOMAIN table: DOMAIN_ID (PK), NAME, TITLE_ID (FK)
    READERS table: READER_ID (PK), First_Name, Last_Name, ADDRESS, CITY_ID (FK), PHONE
    CITY table: CITY_ID (PK), NAME
    BORROWING table: BORROWING_ID (PK), READER_ID (FK), TITLE_ID (FK), DATE
    HISTORY table: READER_ID, TITLE_ID, DATE_OF_BORROWING, DATE_OF_RETURNING

    Do these tables respect 3NF database normalization? What if two authors work together on the same title? Should the ADDRESS column have its own table? When a reader borrows a book, I make an entry in the BORROWING table; after he returns the book, I delete that entry and make a new entry in the HISTORY table. Is this a good idea, or do I break any rule? Should I instead have a single BORROWING table with a DATE_OF_RETURNING column?
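
    On the two-authors question specifically, the usual fix is a junction table between AUTHOR and TITLES, so a title can have any number of authors. A sketch (an addition, not part of the original schema; types are illustrative):

        -- Replaces the Author_ID foreign key column on TITLES
        CREATE TABLE AUTHOR_TITLE (
            Author_ID INT NOT NULL,
            Title_ID  INT NOT NULL,
            PRIMARY KEY (Author_ID, Title_ID),
            FOREIGN KEY (Author_ID) REFERENCES AUTHOR (Author_ID),
            FOREIGN KEY (Title_ID)  REFERENCES TITLES (TITLE_ID)
        );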

    Read the article

  • NHibernate - Mapping tagging of entities

    - by ZNS
    Hi! I've been racking my brain over how to get my tagging of entities to work. I'll get right into the database structure:

        tblTag: TagId (int32, PK), Name
        tblTagEntity: TagId (PK), EntityId (PK), EntityType (string, PK)
        tblImage: ImageId (int32, PK)
        tblBlog: BlogId (int32, PK)

    and the classes:

        class Image { Id; EntityType { get { return "MyNamespace.Entities.Image"; } } IList Tags; }
        class Blog  { Id; EntityType { get { return "MyNamespace.Entities.Blog"; } } IList Tags; }

    The obvious problem I have here is that EntityType is an identifier but doesn't exist in the database. If anyone could help with this mapping I'd be very grateful.

    Read the article

  • Database table design vs. ease of use.

    - by Gastoni
    I have a table with 3 fields: color, fruit, date. I can pick 1 fruit and 1 color, but I can do this only once each day. Examples:

        red, apple, monday
        red, mango, monday
        blue, apple, monday
        blue, mango, monday
        red, apple, tuesday

    The two ways in which I could build the table are:

    1. Make color, fruit and date a composite primary key: color (PK), fruit (PK), date (PK). This makes it easy to insert data into the table, because all the validation needed is done by the database.
    2. Have an id column as the PK and then all the other fields: id (PK), color, fruit, date. Many say that's the way it should be, because composite PKs are evil; for example, CakePHP does not support them.

    Both have advantages. Which would be the 'better' approach?
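
    A middle ground worth noting, as a sketch (MySQL-flavored, not from the original question; the date column is renamed to sidestep the keyword): keep the surrogate id that tools like CakePHP expect, and let a unique constraint over (color, fruit, date) give you the same once-per-day guarantee the composite key provided.

        CREATE TABLE picks (
            id        INT AUTO_INCREMENT PRIMARY KEY,
            color     VARCHAR(20) NOT NULL,
            fruit     VARCHAR(20) NOT NULL,
            pick_date DATE NOT NULL,            -- renamed from "date" to avoid the keyword
            UNIQUE KEY uq_one_pick_per_day (color, fruit, pick_date)
        );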

    Read the article
