Search Results

Search found 39181 results on 1568 pages for 't sql programming'.

Page 346/1568

  • PL/SQL Tips (Japanese-language article)

    - by Yusuke.Yamamoto
    Dated 2005/05/02. A Japanese-language collection of Oracle PL/SQL tips covering the supplied packages: DBMS_OUTPUT, UTL_FILE, DBMS_SQL, DBMS_PIPE, DBMS_ALERT, DBMS_JOB, DBMS_UTILITY, DBMS_LOCK.SLEEP, DBMS_RANDOM, DBMS_OBFUSCATION_TOOLKIT, DBMS_CRYPTO, UTL_SMTP and UTL_MAIL. Full PDFs: http://otndnld.oracle.co.jp/tech/pl_sql/pdf/plsql_tips10g.pdf (for 10.1.0) and http://otndnld.oracle.co.jp/tech/db_connect/pdf/PLSQL_TECH.pdf (for 8.0.5).

    Read the article

  • How to build a setup package for a desktop application using SQL CE 3.5 and Entity Framework?

    - by Emad
    I have a WPF desktop application that uses SQL CE (Compact Edition 3.5) with the Entity Framework as a data layer. As it turned out in (http://support.microsoft.com/default.aspx?scid=kb;en-us;958478&sd=rss&spid=2855) and (http://social.msdn.microsoft.com/Forums/en-US/sqlce/thread/b6bac277-cf66-4c74-a0b3-e48abedbd161/), there is a known problem with the Entity Framework and SQL CE, and I had to get the hotfix (basically a new build of SQL CE, version 3.5.1). My problem now is how to build a setup package so that the application works even when some users already have SQL CE 3.5 installed on their machines. I have included the DLLs directly in the application, but when SQL CE is installed its DLLs are in the GAC, take precedence over the local ones, and the application crashes. I need to build a setup that works "even if" the user already has the old, buggy version.

    Read the article

  • How to register System.DirectoryServices for use in SQL CLR User Functions?

    - by Saul Dolgin
    I am porting an old 32-bit COM component, written in VB6, whose purpose is reading from and writing to an Active Directory server. The new solution will be in C# and will use SQL CLR user functions. The assembly that I am trying to deploy to SQL Server contains a reference to System.DirectoryServices. The project compiles without any errors, but I am unable to deploy the assembly to SQL Server because of the following error: Error: Assembly 'system.directoryservices, version=2.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a.' was not found in the SQL catalog. What are the correct steps for registering System.DirectoryServices on SQL Server?
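
    For context, System.DirectoryServices is not one of the framework libraries that SQL Server pre-loads into its CLR catalog, so it has to be cataloged manually before any assembly that references it will deploy. A minimal sketch of the usual approach follows; the database name and framework path are illustrative, and UNSAFE brings the usual TRUSTWORTHY (or assembly-signing) requirement with it:

        USE MyDatabase;  -- illustrative database name
        GO
        -- UNSAFE assemblies require TRUSTWORTHY ON (or a certificate/asymmetric-key approach)
        ALTER DATABASE MyDatabase SET TRUSTWORTHY ON;
        GO
        -- Catalog the framework assembly itself; the path varies by machine and framework version
        CREATE ASSEMBLY [System.DirectoryServices]
        FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
        WITH PERMISSION_SET = UNSAFE;
        GO

    Once the framework assembly is cataloged, deploying an assembly that references it should no longer raise the "was not found in the SQL catalog" error.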

    Read the article

  • LINQ to SQL Azure generating error "Specified cast is not valid."

    - by Rabbi
    B"H I have an application that has been working for months using LINQ to SQL connecting to a SQL Express instance. I tried migrating it to SQL Azure. I copied the structure and data using the Sync Framework. I viewed the data in SQL Azure using SSMS 2008 R2 and it seems to be exactly what I have in my SQL Server. However, when I try to use LINQ to SQL against it I get the error "Specified cast is not valid." It seems to happen any time I get child records, i.e. whenever I fill (first access) an entity set. It seems to happen after the data returns, when LINQ tries to put it into the objects. Remember, the application works perfectly against SQL Express, even when accessed across the internet or VPN.
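
    One hedged way to chase this down: "Specified cast is not valid" during materialization usually means some column's database type no longer matches the type in the LINQ to SQL model, and schema-copy tools can quietly change types. A diagnostic sketch is to run a schema dump like the following against both the SQL Express source and the SQL Azure copy and diff the two result sets:

        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        ORDER BY TABLE_NAME, ORDINAL_POSITION;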

    Read the article

  • In SQL Server, can multiple inserts be replaced with a single insert that takes an XML parameter?

    - by Mayo
    So I have an existing ASP.NET solution that uses LINQ to SQL to insert data into SQL Server (5 tables, 110k records total). I had read in the past that XML could be passed as a parameter to SQL Server, but my Google searches turn up results that store the XML directly into a table. I would rather take that XML parameter and insert its nodes as records. Is this possible? How is it done (i.e. how is the XML parameter used to insert records in T-SQL, and how should the XML be formatted)? Note: I'm researching other options like SQL bulk copy, and I know SSIS would be a good alternative. I want to know if this XML approach is feasible.
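
    It is feasible. A minimal sketch of the idea (the procedure, table, and element names here are hypothetical): pass the XML as an xml-typed parameter and shred it with the nodes() and value() methods, which turns the whole batch into a single set-based INSERT:

        CREATE PROCEDURE dbo.InsertCustomers  -- hypothetical procedure and table names
            @data xml
        AS
        BEGIN
            INSERT INTO dbo.Customer (Name, Email)
            SELECT  c.value('(Name)[1]',  'nvarchar(100)'),
                    c.value('(Email)[1]', 'nvarchar(256)')
            FROM    @data.nodes('/Customers/Customer') AS T(c);
        END
        GO

        -- Example call; the XML shape just has to match the paths used in nodes()/value()
        DECLARE @x xml = N'<Customers>
          <Customer><Name>Ann</Name><Email>ann@example.com</Email></Customer>
          <Customer><Name>Bob</Name><Email>bob@example.com</Email></Customer>
        </Customers>';
        EXEC dbo.InsertCustomers @data = @x;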

    Read the article

  • How can I avoid setting some columns if others haven't changed, when working with Linq To SQL?

    - by Patrick Szalapski
    In LINQ to SQL, I want to avoid setting some columns if others haven't changed. Say I have:

        dim row = (From c In dataContext.Customers Where c.Id = 1234 Select c).Single()
        row.Name = "Example"        ' line 3
        dataContext.SubmitChanges() ' line 4

    Great, so LINQ to SQL fetches a row, sets the name to "Example" in memory, and generates an UPDATE statement only when necessary; that is, no SQL will be generated if the customer's name was already "Example". Now suppose that on line 3 I want to detect whether row has changed, and if so, set row.UpdateDate = DateTime.Now. If row has not changed, I don't want to set row.UpdateDate, so that no SQL is generated. Is there any good way to do this?

    Read the article

  • Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

    - by Chris Pebble
    I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name:

        SELECT t.est.* FROM test

    instead of:

        SELECT test.* FROM test

    And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue. I've tried the same query in SQL Server 2008, where it fails (error: "The column prefix does not match with a table name or alias name used in the query"). Out of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far. Any SQL gurus know why SQL Server 2000 ran the query without issue?

    Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL) and, as Jhonny pointed out below, it bizarrely even works when you try: SELECT TOP 1000 dbota.ble.* FROM dbo.table
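
    For anyone who wants to reproduce the quirk, a throwaway script along these lines (the table name is arbitrary) shows the version difference:

        CREATE TABLE test (id INT);
        INSERT INTO test (id) VALUES (1);

        SELECT test.*  FROM test;   -- works on SQL Server 2000 and 2008
        SELECT t.est.* FROM test;   -- works on SQL Server 2000, fails on 2008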

    Read the article

  • SQL Server 2008 Express: cannot create MDF database

    - by yair
    Hi, I'm working with Visual Studio 2008 Express and SQL Server 2008 Express. I'm trying to create an MDF database (Add New Item...) and I receive the following error message: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - could not open a connection to SQL Server)" Why? Is it possible to create an MDF database with SQL Server 2008 Express? If it is, then what should I do?

    Read the article

  • How can we improve overall Programmer Education & Training?

    - by crosenblum
    Last week I watched an amazing interview by Kevin Rose of Philip Rosedale of Second Life. They had a great discussion about how to find, hire, and identify good programmers, and how hard it is to find good ones. It has led me to really think about the way we programmers learn and are taught. A majority of us, myself included, are self-taught. That is one of the great things about being a programmer: anyone can learn and develop skills. But it also means there is no real standard for what a good programmer is, and no standard for what kinds of environments encourage the growth of programming skills. This isn't so much a question as a desire to see how we can change the culture of programming, and the managers of programmers, so that education and self-improvement are encouraged. There are a lot of avenues for continued education (YouTube videos, books, conferences), but because of the experiential nature of what we do, it isn't always clear what is important to learn and to master. Let's look at the Joel Test:

    Do you use source control?
    Can you make a build in one step?
    Do you make daily builds?
    Do you have a bug database?
    Do you fix bugs before writing new code?
    Do you have an up-to-date schedule?
    Do you have a spec?
    Do programmers have quiet working conditions?
    Do you use the best tools money can buy?
    Do you have testers?
    Do new candidates write code during their interview?
    Do you do hallway usability testing?

    I think all of these have important value, but because of something I call the Experiential Gap, if a programmer or manager has never experienced the negative consequences of skipping an item on this list, they will never see the need to do it. The Experiential Gap is my basic theory that each of us has different jobs and different experiences. So for some of us, who have always worked with dozens of programmers, source control is a must-have. But people who have always been the only programmer cannot imagine the need for source control. It is because of this major flaw in how we learn that we evaluate people by which best practices they do or do not follow, and the reasons for either can start a flame war. We always evaluate people in our field by what they do, and think, "Oh, if this guy or gal isn't doing best practice xyz, he or she can't be a good programmer, so let's not waste time or energy talking to them." This is exactly why we have so many programming flame wars: because of the Experiential Gap, we cannot imagine people not having made the decisions we have had to make. So this has led me to think that we totally need to rethink how we train, educate, and manage programmers. For example, what percentage of you have been encouraged by your managers to go to conferences, with the company even paying for it? For me, and a lot of people, this is extremely rare; a lot of us would love to go to conferences to learn more, but the money ain't there to do that. So the point of this question is really to spark discussion: how can we train, learn, and manage better? How can we create a new culture of learning that doesn't insult people for not having the same job experiences? Yes, we all have jobs and work to do, but our ability to do our jobs well depends on our desire, interest, and support in improving the mastery of our skills.
    Right now I see our culture as rather disorganized: we support the elite, but the many of us who want to get better just don't have enough support to learn and improve ourselves. I mean, do we as an industry want to be perceived as just replaceable cogs? Thank you...

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups: SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of the two, go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly changing dimension. If it is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore, you might also have datetime fields that indicate the effective time period of each member record. Here is such a table containing data for four dimension members {Terry, Max, Henry, Horace}. Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] and [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]. The conventional approach would be to use a fully cached lookup, but that isn't an option here because we are using inequality conditions. The obvious next step is a non-cached lookup, which enables us to change the SQL statement to use inequality operators.

    Looking at the dataflow, notice these are all synchronous components. This approach works just fine; however, it has the limitation that it must issue a SQL statement against the lookup set for every row, so we can expect the execution time of the dataflow to increase linearly with the number of rows in the dataflow; that's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. I've shown it post-execution so that I can include the row counts, which help to illustrate what is going on here. Notice that there are more rows output from the Merge Join component than on its input; that is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times of the two methods. The video is just over 8 minutes long. View on Vimeo. For those that can't be bothered watching the video, I'll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance-tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip

    Comments are welcome as always. @Jamiet
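
    For reference, the non-cached lookup described above boils down to a parameterised statement of roughly this shape, with the ? placeholders mapped to the dataflow's input columns (the table and column names follow the example in the post):

        SELECT  [CustomerId]
        FROM    [dbo].[Customer]
        WHERE   [CustomerName] = ?
            AND [SCDStartDate] <= ?
            AND ? <= [SCDEndDate];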

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 3)

    - by Sadequl Hussain
    Continuing from Part 2 of the Database Migration Checklist series:

    Step 10: Full-text catalogs and full-text indexing. This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems. Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.

    First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, after migration, if you are unable to look at the properties of any full-text catalogs, you are probably better off dropping and recreating them. You may also get the following error messages along the way:

        Msg 9954, Level 16, State 2, Line 1
        The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.

    This basically means the full-text service is not running (disabled or stopped) in the destination instance. You will need to start it from the Configuration Manager. Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it:

        Msg 7624, Level 16, State 1, Line 1
        Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.

    A full population of full-text indexes can be a time- and resource-intensive operation, so you will obviously want to schedule it for low-usage hours if the database is restored into an existing production server. Also bear in mind that any scheduled job that existed in the source server for populating the full-text catalog (e.g. a nightly process for incremental updates) will need to be re-created in the destination.

    Step 11: Database collation considerations. Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database in a SQL Server instance with the same collation. Although not commonly used, SQL Server allows you to change a database's collation by using the ALTER DATABASE command:

        ALTER DATABASE database_name COLLATE collation_name

    You should not use this command without good reason, as it can get really dangerous. When you change the database collation, it does not change the collation of the existing user table columns. However, the columns of every new table, every new UDT, and subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext field of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation and, finally, every character-based system data type and user-defined data type will also be affected. The change may not be successful either if there are dependent objects involved; you may get one or more messages like the following:

        Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.

    That is why it is important to test and check for collation-related issues. Collation also affects queries that use comparisons of character-based data. If errors arise due to the two sides of a comparison being in different collation orders, the COLLATE keyword can be used to cast one side to the same collation as the other.

    Continues…
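
    As a quick illustration of the COLLATE cast described in Step 11 (the table and column names below are made up), casting one side of the comparison lets a join succeed when two columns have ended up in different collations:

        SELECT  s.CustomerName
        FROM    dbo.SourceCustomer   AS s  -- hypothetical tables in different collations
        JOIN    dbo.MigratedCustomer AS m
            ON  s.CustomerName = m.CustomerName COLLATE SQL_Latin1_General_CP1_CI_AS;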

    Read the article

  • Collision between sprites in game programming?

    - by Lyn Maxino
    I've just started coding an Android game using Eclipse. I've read Beginning Android Game Programming and various other e-books. Recently I've encountered a problem with collision between sprites. I've used this code template for my program:

        package com.project.CAI_test;

        import java.util.Random;
        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Rect;

        public class Sprite {
            // direction = 0 up, 1 left, 2 down, 3 right,
            // animation = 3 back, 1 left, 0 front, 2 right
            int[] DIRECTION_TO_ANIMATION_MAP = { 3, 1, 0, 2 };
            private static final int BMP_ROWS = 4;
            private static final int BMP_COLUMNS = 3;
            private static final int MAX_SPEED = 5;
            private GameView gameView;
            private Bitmap bmp;
            private int x = 0;
            private int y = 0;
            private int xSpeed;
            private int ySpeed;
            private int currentFrame = 0;
            private int width;
            private int height;

            public Sprite(GameView gameView, Bitmap bmp) {
                this.width = bmp.getWidth() / BMP_COLUMNS;
                this.height = bmp.getHeight() / BMP_ROWS;
                this.gameView = gameView;
                this.bmp = bmp;
                Random rnd = new Random();
                x = rnd.nextInt(gameView.getWidth() - width);
                y = rnd.nextInt(gameView.getHeight() - height);
                xSpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED;
                ySpeed = rnd.nextInt(MAX_SPEED * 2) - MAX_SPEED;
            }

            private void update() {
                if (x >= gameView.getWidth() - width - xSpeed || x + xSpeed <= 0) {
                    xSpeed = -xSpeed;
                }
                x = x + xSpeed;
                if (y >= gameView.getHeight() - height - ySpeed || y + ySpeed <= 0) {
                    ySpeed = -ySpeed;
                }
                y = y + ySpeed;
                currentFrame = ++currentFrame % BMP_COLUMNS;
            }

            public void onDraw(Canvas canvas) {
                update();
                int srcX = currentFrame * width;
                int srcY = getAnimationRow() * height;
                Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
                Rect dst = new Rect(x, y, x + width, y + height);
                canvas.drawBitmap(bmp, src, dst, null);
            }

            private int getAnimationRow() {
                double dirDouble = (Math.atan2(xSpeed, ySpeed) / (Math.PI / 2) + 2);
                int direction = (int) Math.round(dirDouble) % BMP_ROWS;
                return DIRECTION_TO_ANIMATION_MAP[direction];
            }

            public boolean isCollition(float x2, float y2) {
                return x2 > x && x2 < x + width && y2 > y && y2 < y + height;
            }
        }

    The above code only detects collision between the generated sprites and the surface border. What I want to achieve is collision detection between the sprites themselves, controlled by the update function, without having to change much of the code (probably several lines placed in the update() function). Thanks for any comment/suggestion.

    Read the article

  • Modifying Contiguous Time Periods in a History Table

    Alex Kuznetsov is credited with a clever technique for building a SQL Server history table that stores contiguous time periods and checks that those periods really are contiguous, using nothing but constraints. This is increasingly useful now that SQL Server has the DATE data type. Modifying data in this type of table isn't always entirely intuitive, so Alex is on hand to give a brief explanation of how to do it.
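
    The heart of the technique, as a minimal sketch with illustrative names: each row stores the end date of the row that precedes it, and a self-referencing foreign key plus CHECK constraints force the chain of periods to stay unbroken:

        CREATE TABLE dbo.AccountHistory (     -- illustrative table and column names
            AccountId         int  NOT NULL,
            PeriodStart       date NOT NULL,
            PeriodEnd         date NULL,      -- NULL marks the current, still-open period
            PreviousPeriodEnd date NULL,      -- NULL only for an account's first period
            CONSTRAINT PK_AccountHistory PRIMARY KEY (AccountId, PeriodStart),
            CONSTRAINT UNQ_AccountHistory_End UNIQUE (AccountId, PeriodEnd),
            CONSTRAINT CHK_Order CHECK (PeriodStart < PeriodEnd),
            CONSTRAINT CHK_Contiguous CHECK (PeriodStart = PreviousPeriodEnd),
            CONSTRAINT FK_Contiguous FOREIGN KEY (AccountId, PreviousPeriodEnd)
                REFERENCES dbo.AccountHistory (AccountId, PeriodEnd)
        );

    Because both CHECK constraints pass when PreviousPeriodEnd or PeriodEnd is NULL, an account's first and current periods are allowed, and every other row must start exactly where an existing row ends. The flip side, and the subject of the article, is that inserting, updating or deleting a period mid-chain means adjusting the neighbouring rows in the same transaction.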

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new federal tax plans is new licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful; that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

    First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work, because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money, and the old per-socket model was not going to do that going forward.

    Per-core licensing also simplifies virtualization licensing. Physical core = virtual core, at least for licensing. Oversubscribe your processors? That's your lookout; you still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there.

    Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial.

    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, that is the minimum core count Microsoft proposes for swapping per-socket to per-core licenses if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ:

    15. How do I migrate from processor licenses to core licenses? What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).

    Looks like the folks who invested in the AMD 12-core chips will make out like bandits.

    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU, licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want the full BI feature set but without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, compression, or partitioning in the BI Edition; those are Enterprise-scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8-core server).

    The only part of the message I am missing is whether the current failover licensing policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.

    Read the article

  • Reading graph inputs for a programming puzzle and then solving it

    - by Vrashabh
    I just took a programming competition question and I absolutely bombed it. I had trouble right at the beginning, with reading the input set. The question was basically a variant of this puzzle: http://codercharts.com/puzzle/evacuation-plan but it also had an hour component in the first line (say, 3 hours after the start of the evacuation). It reads like this:

    This puzzle is a tribute to all the people who suffered from the earthquake in Japan. The goal of this puzzle is, given a network of roads and locations, to determine the maximum number of people that can be evacuated. The people must be evacuated from evacuation points to rescue points. The list of roads and the number of people they can carry per hour is provided.

    Input Specifications: Your program must accept one and only one command-line argument: the input file. The input file is formatted as follows: the first line contains 4 integers n r s t, where n is the number of locations (each location is given by a number from 0 to n-1), r is the number of roads, s is the number of locations to be evacuated from (evacuation points) and t is the number of locations where people must be evacuated to (rescue points). The second line contains s integers giving the locations of the evacuation points. The third line contains t integers giving the locations of the rescue points. The r following lines contain the road definitions. Each road is defined by 3 integers l1 l2 width, where l1 and l2 are the locations connected by the road (roads are one-way) and width is the number of people per hour that can fit on the road.

    Now look at the sample input set:

        5 5 1 2 3
        0
        3 4
        0 1 10
        0 2 5
        1 2 4
        1 3 5
        2 4 10

    The 3 in the first line is the additional component, defined as the number of hours since the rescue started (3 in this case). My solution was to use Dijkstra's algorithm to find the shortest path between each of the rescue and evacuation nodes. My problem started with how to read the input set. I read the first line in Python and stored the values in variables, but then I did not know how to store the distances between the nodes, which data structure to use, or how to feed it to a standard implementation of Dijkstra's algorithm. So my question is twofold:

    1.) How do I take the input of such problems? I have faced this problem in quite a few competitions recently, and I hope I can get a simple code snippet or an explanation in Java or Python for reading such a data input set so that I can feed it as a graph to graph algorithms like Dijkstra's and Floyd-Warshall. A solution to the above problem would also help.

    2.) How do I solve this puzzle? My algorithm was: find the shortest path between evacuation points (in the above example it is 14 from 0 to 3) and multiply it by the number of hours to get the maximal number of saves. But the answer given for the variant input set was 24, which I didn't understand at first.

    UPDATE: I get how the answer is 14 in the given problem link; it seems to be just the shortest path between nodes 0 and 3. But with the 3-hour component, how is the answer 24?

    UPDATE: I get how it is 24. It is a complete graph traversal at every hour, and this is how I solve it:

    Hour 1: Node 0 to Node 1 = 10 people. Node 0 to Node 2 = 5 people. TotalRescueCount = 0. Node 1 = 10, Node 2 = 5.

    Hour 2: Node 1 to Node 3 = 5 (rescued). Node 2 to Node 4 = 5 (rescued). Node 0 to Node 1 = 10. Node 0 to Node 2 = 5. Node 1 to Node 2 = 4. TotalRescueCount = 10. Node 1 = 10, Node 2 = 5 + 4 = 9.

    Hour 3: Node 1 to Node 3 = 5 (rescued). Node 2 to Node 4 = 5 + 4 = 9 (rescued). TotalRescueCount = 9 + 5 + 10 = 24.

    It is hard enough for this case; for multiple evacuation and rescue points, how in the world would I write a program for this?

    Read the article

  • Exclamation Marks in an SQL Query

    - by mikeabyss
    I'm reading over this query, and I came upon a line I don't understand. Here's the line: [FETT List]![FETT Search]. FETT List is a table, and FETT Search is a column in FETT List. Can someone explain what the exclamation mark means? Thanks

    Read the article
