Search Results

Search found 26878 results on 1076 pages for 'temp table'.


  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database into an Excel spreadsheet for analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and import it into Excel. For example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and specify the worksheet to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel with the Text Import Wizard, attempting to correctly split the data into columns. These methods are fine, but involve some degree of technical knowledge to make the magic happen, and require repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action that lets you import data from a MySQL Table, View or Stored Procedure with literally a few clicks from within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010, and MySQL for Excel installed.

    1. Opening MySQL for Excel. Being an Excel add-in, MySQL for Excel is opened from within Excel: open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL connection (may be optional). If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), click New Connection in the Welcome Panel, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, and specify the Hostname (or IP address) where the MySQL Server instance is running (if different from localhost), the Port to connect to and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection; an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server. To open a pre-configured connection to a MySQL Server you just need to double-click it; the Connection Password dialog is displayed, where you enter the password for the login.

    4. Selecting a MySQL schema. After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, either double-click the desired schema or select it and click Next >.

    5. Importing data. All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the database objects (grouped by type: Tables, Views and Procedures, in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them.

    a. From a MySQL Table. To import from a table, you just need to select it from the Tables group of the database objects list; after selecting it you will notice that the actions below the list become available. Then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here, like the name of the Excel worksheet the data will be imported into (in the window title), the Table Name, the total Row Count and a 10-row preview of the data, meant to let the user see the columns that the table contains and provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet, formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or the number of imported rows, you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 to 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning is displayed in the dialog, meaning the imported number of rows will be limited by that maximum (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the table contains 80,559 rows, but only 65,534 rows will be imported, since the first row is used for the column names when the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View. Similar to importing from a table, to import from a view you just need to select it from the Views group of the database objects list, then click Import MySQL Data. The Import Data dialog is displayed; just as when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since views are really a filtered way to display data from tables, extracting data from a view works just as if we were extracting data from a table, so the Import Data dialog is identical for those two database objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from tables. The Compatibility Mode warning will also be displayed if the data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure. To import from a procedure, you just need to select it from the Procedures group of the database objects list (note that you can see Procedures here but not Functions, since functions return a single value and are filtered out by design). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time it looks different from the one used for tables and views. Given the nature of stored procedures, they first require values for their parameters, and procedures can return multiple result sets; so the Import Data dialog shows the Procedure Name and the procedure parameters in a grid where their values are entered. After you supply the parameter values, click Call.

    After calling the procedure, the result sets it returns are displayed at the bottom of the dialog; output parameters and the return value of the procedure are appended as the last result set of the group. Each result set is displayed as a tab so you can see a preview of the returned data. You can specify whether to import the Selected Result Set (the default), All Result Sets - Arranged Horizontally or All Result Sets - Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that in this example all result sets were imported and arranged vertically. As you can see, using MySQL for Excel, importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!
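
    For comparison with the manual CSV route mentioned at the start, that dump can be done with a single statement in the MySQL Command Line Client; a minimal sketch, assuming the server account has the FILE privilege and using placeholder table/path names:

        -- Hypothetical example: dump a table to a comma-delimited file that
        -- Excel's Text Import Wizard can open. Table name and path are
        -- placeholders; the output path must be writable by the server.
        SELECT *
        INTO OUTFILE '/tmp/customers.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM customers;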

    Read the article

  • Python PyBluez loses Bluetooth connection after a while

    - by Travis G.
    I am using Python to write a simple serial Bluetooth script that sends information about my computer stats periodically. The receiving device is a Sparkfun BlueSmirf Silver. The problem is that, after the script runs for a few minutes, it stops sending packets to the receiver and fails with the error: (11, 'Resource temporarily unavailable') Noticing that this inevitably happens, I added some code to automatically try to reopen the connection. However, then I get: Could not connect: (16, 'Device or resource busy') Am I doing something wrong with the connection? Do I need to occasionally reopen the socket? I'm not sure how to recover from this type of error. I understand that sometimes the port will be busy and a write operation is deferred to avoid blocking other processes, but I wouldn't expect the connection to fail so regularly. Any thoughts? Here is the script:

        import psutil
        import serial
        import string
        import time
        import bluetooth

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        #gauges = serial.Serial()
        #gauges.port = '/dev/rfcomm0'
        #gauges.baudrate = 9600
        #gauges.parity = 'N'
        #gauges.writeTimeout = 0
        #gauges.open()

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        def connect():
            while(True):
                try:
                    gaugeSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                    gaugeSocket.connect(('00:06:66:42:22:96', 1))
                    break;
                except bluetooth.btcommon.BluetoothError as error:
                    print "Could not connect: ", error, "; Retrying in 5s..."
                    time.sleep(5)
            return gaugeSocket;

        gaugeSocket = connect()
        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            try:
                #gauges.write(USAGE_CHAR)
                gaugeSocket.send(USAGE_CHAR)
                #gauges.write(chr(int(usage))) #write the first byte
                gaugeSocket.send(chr(int(usage)))
                #print("Wrote usage: " + str(int(usage)))
                #gauges.write(TEMP_CHAR)
                gaugeSocket.send(TEMP_CHAR)
                #gauges.write(chr(temp))
                gaugeSocket.send(chr(temp))
                #print("Wrote temp: " + str(temp))
            except bluetooth.btcommon.BluetoothError as error:
                print "Caught BluetoothError: ", error
                time.sleep(5)
                gaugeSocket = connect()
                pass
        gaugeSocket.close()

    EDIT: I should add that this code connects fine after I power-cycle the receiver and start the script. However, it fails after the first exception until I restart the receiver. P.S. This is related to my recent question, Why is /dev/rfcomm0 giving PySerial problems?, but that was more about PySerial specifically with rfcomm0. Here I am asking about general rfcomm etiquette.

    Read the article

  • Temporary Tables in Stored Procedures

    - by Paul White
    Ask anyone what the primary advantage of temporary tables over table variables is, and the chances are they will say that temporary tables support statistics and table variables do not. This is true, of course; even the indexes that enforce PRIMARY KEY and UNIQUE constraints on table variables do not have populated statistics associated with them, and it is not possible to manually create statistics or non-constraint indexes on table variables. Intuitively, then, any query that has alternative execution...(read more)
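
    To make the distinction the excerpt describes concrete, here is a minimal T-SQL sketch (object names are illustrative, not taken from the article):

        -- Temporary table: the optimizer can create and use column statistics,
        -- so the WHERE predicate below gets a real cardinality estimate.
        CREATE TABLE #orders (order_id int PRIMARY KEY, amount money);
        SELECT order_id FROM #orders WHERE amount > 100;

        -- Table variable: no populated statistics, even on the index that
        -- enforces the PRIMARY KEY, so the optimizer falls back to fixed guesses.
        DECLARE @orders TABLE (order_id int PRIMARY KEY, amount money);
        SELECT order_id FROM @orders WHERE amount > 100;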

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age - Rate. There are 7 age brackets so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy) the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach, that even in the best case ultimately has low benefits. As a professional programmer does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
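
    For scale, the database-backed alternative being weighed here is tiny; a sketch of the lookup table and query, with made-up rates standing in for the real legislated values:

        -- Hypothetical age-rate lookup (column names and values are illustrative).
        CREATE TABLE age_rate (
            age_from INT NOT NULL,
            age_to   INT NOT NULL,
            rate     DECIMAL(5,2) NOT NULL,
            PRIMARY KEY (age_from, age_to)
        );
        INSERT INTO age_rate VALUES (65, 69, 0.05), (70, 74, 0.06);

        -- Lookup for a given age, e.g. 67:
        SELECT rate FROM age_rate WHERE 67 BETWEEN age_from AND age_to;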

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass: Username 'Me' -password 'NowEverybodyKnowsMyPassword'" As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (and you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like: Server1,UserName,Password Server2,UserName,Password" I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready. First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and the other for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions. Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    And in the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your Username and Password, all neatly encrypted. Let's take a look at what the txt looks like: ...and in case you're wondering, server names, usernames and passwords are all separated by commas. Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same as the one used to encrypt.)

    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a checkdb on a system database: it's just a case of split, decrypt and be happy!

        Get-Content c:\temp\ServerSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) `
                -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") `
                -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") `
                -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love Powershell.

    Read the article

  • Configurable tables in sql database

    - by dot
    I have the following tables in my database:

        Config Table:
        ======================================
        Start_Range | End_Range | Config_id
        10          | 15        | 1
        ======================================

        Available_UserIDs:
        ==========================
        ID | UserID | Used_YN |
        1  | 10     | t       |
        1  | 11     | f       |
        1  | 12     | f       |
        1  | 13     | f       |
        1  | 14     | f       |
        1  | 15     | f       |
        ==========================

        Users:
        ==========================
        UserId | FName | LName |
        10     | John  | Doe   |
        ==========================

    This is used in a reservation system of sorts, which lets an administrator specify a range of numbers that will be assigned to users in the config table. Once the range has been defined, the system populates the Available_UserIDs table with all the numbers in the range and sets the Used_YN flag to false. As users sign up, they grab the next user_id number that's not in use and reserve it; then the system adds a record to the Users table. Once the admin has specified a range, it is possible that they can change it. For example, they can start with 10-15, and then when the range is used up, they should be able to specify another range like 16-99. I've put a unique constraint on the Available_UserIDs table, as well as on the Users table, to ensure that UserIds can't be duplicated. My questions are as follows:

    1. What's the best way to prevent the admins from using a range that's already in use? I thought of the following options:
    -- Check the Users table to see if the start range or end range numbers are being used. If they are, assume that all the numbers in between are in use too, and reject the range.
    -- Let them specify whatever they want, and try to populate the Available_UserIDs table. If there are duplicates, just ignore that specific error message from the database and continue on.

    2. How do I find gaps in the number ranges? For example, if they specify 10-15, and then 20-25, it'd be nice to be able to somehow suggest on my web page that 16-19 is currently available. I found this article: http://stackoverflow.com/questions/1312101/how-to-find-a-gap-in-running-counter-with-sql But it only seems to return the first available number, so in my example above it would only return the number 16. I'm sure there's a simpler way to do things that I'm overlooking!
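
    For the second question, one hedged sketch that lists every gap rather than only the first free number; it uses the column names from the tables above and a window function, which MySQL only gained in 8.0 (SQL Server and PostgreSQL have it too):

        -- Sketch: report each gap between consecutive reserved UserIDs.
        -- With ranges 10-15 and 20-25 populated, this returns gap 16-19.
        SELECT UserID + 1  AS gap_start,
               next_id - 1 AS gap_end
        FROM (
            SELECT UserID,
                   LEAD(UserID) OVER (ORDER BY UserID) AS next_id
            FROM Available_UserIDs
        ) t
        WHERE next_id - UserID > 1;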

    Read the article

  • Oracle Enterprise Manager 12c Ops Center Jump-Start for Partners

    - by Get_Specialized!
    Following the announcement at Oracle OpenWorld Tokyo, partners can check out these resources to further learn about Oracle Enterprise Manager 12c Ops Center and then use it to optimize your solutions/services or offer new ones:

    - Product Documentation
    - Oracle Technical Network Resources
    - Online Learning Series for Partners in the OPN Enterprise Manager KnowledgeZone
    - Whitepaper: Making Infrastructure-as-a-Service in the Enterprise a Reality
    - IDC report: Oracle Enterprise Manager 12c Embraces the Cloud with Integrated Lifecycle Management
    - Follow-up webcast April 12th: Total Cloud Control for Systems

    Oracle Enterprise Manager Ops Center 12c is no extra charge and is included in the support contract of Oracle Systems customers. To learn more see the Ops Center Everywhere Program. And if you're not already a member, be sure to join the Oracle Enterprise Manager KnowledgeZone on the Oracle PartnerNetwork Portal.

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set.

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory. To explore how this works I took the data from my previous post and loaded it into a SQL database, then converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn"
                                          , {"BU_Key", "StartDate", "EndDate"}
                                          , {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate])) ),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query was run step by step in a literal fashion you would expect it to run two queries against the SQL database, doing "SELECT * ..." from both tables. However a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2] or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server. Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can against the source systems.

    Read the article

  • Why is my arrow texture being drawn in odd places?

    - by tyjkenn
    This is a script I wrote that places an arrow on the screen, pointing to an enemy off-screen, or, if the enemy is on-screen, it places an arrow hovering above the enemy. Everything seems to work, except for some odd reason, I see random arrows floating around, often skewed and resized (which I really don't understand, because I only rotate and place in this script). Even when I only have one enemy in the scene, I still see these random arrows. It should only be drawing one per enemy. Note: when all enemies are removed, no arrows appear.

        var arrow : Texture;
        var cam : Camera;
        var dim : int = 30;

        function OnGUI() {
            var objects = GameObject.FindGameObjectsWithTag("Enemy");
            for (var ob : GameObject in objects) {
                var pos = cam.WorldToViewportPoint(ob.transform.position);
                if (gameObject.GetComponent(FollowCamera).target != null) {
                    var tar = gameObject.GetComponent(FollowCamera).target.parent;
                }
                if (pos.z > 1 && ob.transform != tar) {
                    var xDiff = (pos.x * cam.pixelWidth) - (cam.pixelWidth / 2);
                    var yDiff = (pos.y * cam.pixelHeight) - (cam.pixelHeight / 2);
                    var angle = Mathf.Rad2Deg * Mathf.Atan(yDiff / xDiff) + 180;
                    if (xDiff > 0) angle += 180;
                    var dist = Mathf.Sqrt(xDiff * xDiff + yDiff * yDiff);
                    var slope = yDiff / xDiff;
                    var camSlope = cam.pixelHeight / cam.pixelWidth;
                    var theX = -1000.0;
                    var theY = -1000.0;
                    var mult = 0;
                    var temp;
                    if (Mathf.Abs(xDiff) > (cam.pixelWidth / 2) || Mathf.Abs(yDiff) > (cam.pixelHeight / 2)) {
                        // touching right
                        if (slope < camSlope && slope > -camSlope) {
                            if (xDiff > (cam.pixelWidth / 2)) {
                                theX = cam.pixelWidth - (dim / 2);
                                mult = -1;
                            } else if (xDiff < -(cam.pixelWidth / 2)) {
                                theX = (dim / 2);
                                mult = 1;
                            }
                            temp = ((cam.pixelWidth / 2) * yDiff) / xDiff;
                            theY = (cam.pixelHeight / 2) + (mult * temp);
                        } else {
                            if (yDiff > (cam.pixelHeight / 2)) {
                                theY = (dim / 2);
                                mult = 1;
                            } else if (yDiff < -(cam.pixelHeight / 2)) {
                                theY = cam.pixelHeight - (dim / 2);
                                mult = -1;
                            }
                            temp = ((cam.pixelHeight / 2) * xDiff) / yDiff;
                            theX = (cam.pixelWidth / 2) + (mult * temp);
                        }
                    } else {
                        angle = -90;
                        theX = (cam.pixelWidth / 2) + xDiff;
                        theY = (cam.pixelHeight / 2) - yDiff - dim;
                    }
                    GUIUtility.RotateAroundPivot(-angle, Vector2(theX, theY));
                    Graphics.DrawTexture(Rect(theX - (dim / 2), theY - (dim / 2), dim, dim), arrow, null);
                    GUIUtility.RotateAroundPivot(angle, Vector2(theX, theY));
                }
            }
        }

    Read the article

  • TechEd 2014 Day 3

    - by John Paul Cook
    There is some confusion about durability of data stored in SQL Server in-memory tables, so some review of the concepts is appropriate. The in-memory option is enabled at the database level. Enabling it at the database level only gives you the option to specify the in-memory feature on a table by table basis. No existing tables or new tables will by default become in-memory tables when you enable the feature at the database level. If you choose to make a table an in-memory table, by default it is...(read more)
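
    A minimal T-SQL sketch of the opt-in behavior described above (database, filegroup and table names are illustrative):

        -- Enabling the feature at the database level only adds a
        -- memory-optimized filegroup; no table becomes in-memory by default.
        ALTER DATABASE SalesDb
            ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDb
            ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
            TO FILEGROUP imoltp_fg;

        -- Each table must opt in individually; the default durability is
        -- SCHEMA_AND_DATA, meaning both the schema and the rows are durable.
        CREATE TABLE dbo.SessionState (
            SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
            Payload   VARBINARY(8000)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);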

    Read the article

  • Mysql - help me optimize this query

    - by sandeepan-nath
    About the system:

    - The system has a total of 8 tables: Users; Tutor_Details (tutors are a type of user; Tutor_Details is linked to Users); learning_packs (stores packs created by tutors); learning_packs_tag_relations (holds tag relations meant for search); tutors_tag_relations; tags; orders (containing purchase details of tutors' packs); and order_details, linked to orders and tutor_details. For a more clear idea about the tables involved please check "The tables" section at the end.
    - A tags-based search approach is being followed. Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details please check the section "How tags work in this system?" below.

    Following is a simpler representation (not the actual) of the more complex query which I am trying to optimize. I have used statements like `explanation of parts` in the query:

        select SUM(DISTINCT(t.tag LIKE "%Dictatorship%")) as key_1_total_matches,
               SUM(DISTINCT(t.tag LIKE "%democracy%")) as key_2_total_matches,
               td.*, u.*, count(distinct(od.id_od)),
               if (lp.id_lp > 0) then `some conditional logic on lp fields` else 0 as tutor_popularity
        from Tutor_Details AS td
        JOIN Users as u on u.id_user = td.id_user
        LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN `some other tables on lp.id_lp - let's call them the learning pack tables set (including Learning_Packs)`
        LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
        LEFT JOIN Orders as o on od.id_order = o.id_order
        LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
        JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag)
        where `some condition on Users table's fields`
        AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
                 THEN `some conditions on the learning pack tables set` ELSE 1 END
        AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
                 THEN `some conditions on the webclasses tables set` ELSE 1 END
        AND CASE WHEN (od.id_od > 0)
                 THEN od.id_author = td.id_tutor and `some conditions on Orders table's fields` ELSE 1 END
        AND (t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%")
        group by td.id_tutor
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        order by tutor_popularity desc, u.surname asc, u.name asc
        limit 0,20

    What does the above query do? It does an AND-logic search on the search keywords (2 in this example: "Democracy" and "Dictatorship") and returns only those tutors for which both keywords are present in the union of two sets: the tutor's details and the details of all the packs created by that tutor. To make things clear, suppose a tutor named "Sandeepan Nath" has created a pack "My first pack". Then:

    - Searching "Sandeepan Nath" returns Sandeepan Nath.
    - Searching "Sandeepan first" returns Sandeepan Nath.
    - Searching "Sandeepan second" does not return Sandeepan Nath.

    The problem: the results returned by the above query are correct (AND logic working as per expectation), but the time taken by the query on heavily loaded databases is around 25 seconds, as against normal query timings of the order of 0.005-0.0002 seconds, which makes it totally unusable. It is possible that some of the delay is being caused because all the possible fields have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible, displaying the same results.

    How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details like name, surname etc. When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details like pack name, description etc. Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table.

    The tables. Most of the following tables contain many other fields which I have omitted here.

        CREATE TABLE IF NOT EXISTS users (
            id_user int(10) unsigned NOT NULL AUTO_INCREMENT,
            name varchar(100) NOT NULL DEFAULT '',
            surname varchar(155) NOT NULL DEFAULT '',
            PRIMARY KEY (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636;

        CREATE TABLE IF NOT EXISTS tutor_details (
            id_tutor int(10) NOT NULL AUTO_INCREMENT,
            id_user int(10) NOT NULL DEFAULT '0',
            PRIMARY KEY (id_tutor),
            KEY Users_FKIndex1 (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51;

        CREATE TABLE IF NOT EXISTS orders (
            id_order int(10) unsigned NOT NULL AUTO_INCREMENT,
            PRIMARY KEY (id_order),
            KEY Orders_FKIndex1 (id_user)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275;

        ALTER TABLE orders ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS order_details (
            id_od int(10) unsigned NOT NULL AUTO_INCREMENT,
            id_order int(10) unsigned NOT NULL DEFAULT '0',
            id_author int(10) NOT NULL DEFAULT '0',
            PRIMARY KEY (id_od),
            KEY Order_Details_FKIndex1 (id_order)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284;

        ALTER TABLE order_details ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order) ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS learning_packs (
            id_lp int(10) unsigned NOT NULL AUTO_INCREMENT,
            id_author int(10) unsigned NOT NULL DEFAULT '0',
            PRIMARY KEY (id_lp),
            KEY Learning_Packs_FKIndex2 (id_author),
            KEY id_lp (id_lp)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23;

        CREATE TABLE IF NOT EXISTS tags (
            id_tag int(10) unsigned NOT NULL AUTO_INCREMENT,
            tag varchar(255) DEFAULT NULL,
            PRIMARY KEY (id_tag),
            UNIQUE KEY tag (tag),
            KEY id_tag (id_tag),
            KEY tag_2 (tag),
            KEY tag_3 (tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419;

        CREATE TABLE IF NOT EXISTS tutors_tag_relations (
            id_tag int(10) unsigned NOT NULL DEFAULT '0',
            id_tutor int(10) DEFAULT NULL,
            KEY Tutors_Tag_Relations (id_tag),
            KEY id_tutor (id_tutor),
            KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;

        CREATE TABLE IF NOT EXISTS learning_packs_tag_relations (
            id_tag int(10) unsigned NOT NULL DEFAULT '0',
            id_tutor int(10) DEFAULT NULL,
            id_lp int(10) unsigned DEFAULT NULL,
            KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag),
            KEY id_lp (id_lp),
            KEY id_tag (id_tag)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;

    Following is the exact query (this includes classes also: tutors can create classes, and search terms are matched with classes created by tutors):

        select count(distinct(od.id_od)) as tutor_popularity,
            CASE WHEN (IF((wc.id_wc > 0),
                          (wc.wc_api_status = 1 AND wc.wc_type = 0 AND
                           wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND
                           (wccp.country_code='IE' or wccp.country_code IN ('INT'))), 0))
                 THEN 1 ELSE 0 END as 'classes_published',
            CASE WHEN (IF((lp.id_lp > 0),
                          (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND
                           (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))), 0))
                 THEN 1 ELSE 0 END as 'packs_published',
            td.*, u.*
        from Tutor_Details AS td
        JOIN Users as u on u.id_user = td.id_user
        LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
        LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
        LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat
        LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent
        LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp)
        LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
        LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc
        LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat
        LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent
        LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
        LEFT JOIN Orders as o on od.id_order = o.id_order
        LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
        JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) OR (t.id_tag = wtagrels.id_tag)
        where (u.country='IE' or u.country IN ('INT'))
        AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
                 THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND
                      (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))
                 ELSE 1 END
        AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
                 THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND
                      wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND
                      (wccp.country_code='IE' or wccp.country_code IN ('INT'))
                 ELSE 1 END
        AND CASE WHEN (od.id_od > 0)
                 THEN od.id_author = td.id_tutor and o.order_status = 'paid' and
                      CASE WHEN (od.id_wc > 0) THEN od.can_attend_class=1 ELSE 1 END
                 ELSE 1 END
        AND 1
        group by td.id_tutor
        order by tutor_popularity desc, u.surname asc, u.name asc
        limit 0,20

    Please note: the provided database structure does not show all the fields and tables used in this query.
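
    Not part of the question, but one commonly suggested direction for queries like this, offered as a hedged sketch rather than a tested rewrite: leading-wildcard predicates such as t.tag LIKE "%democracy%" cannot use the B-tree indexes on tags.tag, so every tag row is scanned; a FULLTEXT index lets the keyword match be resolved by index instead:

        -- Sketch: resolve keyword matches via a FULLTEXT index.
        -- (FULLTEXT on InnoDB requires MySQL 5.6+; on older versions the
        -- tags table would have to be MyISAM.)
        ALTER TABLE tags ADD FULLTEXT INDEX ft_tag (tag);

        SELECT id_tag FROM tags
        WHERE MATCH(tag) AGAINST ('+democracy' IN BOOLEAN MODE);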

    Read the article

  • AT&T U-verse 2Wire Router - Increase session table limit?

    - by caleban
    AT&T U-verse VDSL "fiber to the node" 24Mbit down / 3Mbit up 2Wire Router Model 3800HGV-B Software Version 6.1.9.24-enh.tm The 2Wire router appears to have a limit of 1024 TCP and UDP sessions. This limit appears to apply to all sessions regardless of any static IP, firewall off, DMZ plus, secondary router configurations. I've tried using the 2Wire router alone and also configuring the 2Wire static IP addressing, firewall off, DMZ plus, etc. setup along with my own pfSense router/firewall. Either way it appears I exceed the 1024 session limit and sessions start being reset. Running out of sessions isn't being caused by torrents or p2p etc. We're a business and our legitimate uses are exceeding this session limit. AT&T tells me it's not possible to bridge the router or increase or avoid the session table limit. I'm curious if anyone has found a way around either of these issues.

    Read the article

  • Why might powerpoint not let me adjust the height of a table row?

    - by YGA
    Powerpoint is fighting me every time I try to adjust the height of a table row, and I'm wondering if folks have ideas why that might be the case. See the attached picture; the Argentina row has a height of 0.41", while the Nicaragua row is 0.61". Whenever I try to change the Nicaragua row (either by manually moving the row line, or by typing a new height into the box), Powerpoint immediately resets it. The difference? The Argentina row I typed in directly, while the Nicaragua row I pasted in from Excel. Thoughts on what might be the difference?

    Read the article

  • How do I create a table of contents in a Word document that has a mind of its own?

    - by Howiecamp
    I'm embarrassed to admit that I'm struggling to get a table of contents going in a Word doc that's already been created. I know enough to understand that the TOC is based on the type of the header/style and indentation. My approach so far has been to auto-generate the TOC and then try (unsuccessfully) to fix the problems; perhaps this isn't the best approach in this situation. What's happening is that the TOC is missing half my sections and for others it's adding way too much detail. Again my sense is I have to "fix" individual section headings but I haven't been successful so far.

    Read the article

  • Advised auditing method for MS SQL to track changes made to a specific table by a specific user?

    - by scape
    What is the best method for tracking changes or logging the queries run against a table by a specific user when that person is using Management Studio? I'm using 2008 R2 Express Edition and want to specifically track a single user who logs in through Management Studio and runs queries to make changes manually. I want to see what query was run, and thus determine what was changed and how. I am not interested in restoring the information. I considered Change Tracking, but read that it is not ideal for auditing, and I am also unsure how to read its data. I then considered the Bulk-Logging option on the database, but I would then have to handle the log files, which may grow huge as the database is used constantly by a web app. I am wondering if there is a more concise method to do what I want?
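
    In the absence of SQL Server Audit (not available on 2008 R2 Express), one low-tech sketch is a DML trigger that records who changed what; all table and column names below are made up for illustration, and this captures the who/when and the new row values rather than the literal query text (a server-side trace would be needed for that):

        -- Hypothetical audit-trigger sketch for a single watched table.
        CREATE TABLE dbo.AuditLog (
            LogId     INT IDENTITY PRIMARY KEY,
            EventTime DATETIME      NOT NULL DEFAULT GETDATE(),
            LoginName SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
            HostName  SYSNAME       NOT NULL DEFAULT HOST_NAME(),
            Action    NVARCHAR(10)  NOT NULL,
            Details   NVARCHAR(MAX) NULL
        );
        GO
        CREATE TRIGGER trg_Customers_Audit
        ON dbo.Customers          -- placeholder table name
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- One log row per statement (a sketch, not a hardened solution).
            INSERT dbo.AuditLog (Action, Details)
            SELECT CASE WHEN EXISTS (SELECT * FROM inserted)
                         AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
                        WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
                        ELSE 'DELETE' END,
                   (SELECT * FROM inserted FOR XML AUTO);  -- new values, if any
        END;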

    Read the article

  • How can we recover/restore lost/overwritten data in our MSSQL 2008 table?

    - by TeTe
    I am in serious trouble and I am seeking professional advice here. We are using MSSQL Server 2008. We removed a primary key and replaced existing data with new data, which resulted in losing our critical business data in its child tables on the MSSQL Server. It was completely human mistake and we didn't have a disk failure. 1) The last backup file is a month old, which means it is useless. 2) We created Maintenance Plans to back up our database at 12AM every day, but those files are nowhere to be found. 3) A friend of mine said we can recover from transaction logs. When I go to Tasks > Restore, the "Transaction log" option is dimmed/disabled. 4) I checked Management > Maintenance Plans. I can't find any restore point there. It seems that our maintenance plan hasn't been working. Is there any third-party tool to recover lost/overwritten data from an MSSQL table? Thanks a lot.
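
    For reference, if the database is in the FULL recovery model and the log chain is intact (a full backup plus unbroken log backups, which does not appear to be the case here), the standard route is a point-in-time restore to just before the mistake; a hedged sketch with placeholder names, paths and times:

        -- Sketch: point-in-time restore. Requires FULL recovery model and an
        -- unbroken log-backup chain; all names/paths/times are placeholders.
        BACKUP LOG MyDb TO DISK = 'C:\Backups\MyDb_tail.trn'
            WITH NORECOVERY;  -- capture the tail of the log first
        RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb_full.bak'
            WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_log1.trn'
            WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = 'C:\Backups\MyDb_tail.trn'
            WITH STOPAT = '2012-07-01 09:55:00', RECOVERY;

    Restoring into a copy database (using WITH MOVE) and pulling the lost rows across is usually safer than restoring over the live database.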

    Read the article

  • HTML Tables with jQuery Filtering

    - by Bry4n
    Let's say I have...

        <form action="#">
          <fieldset>
            to:<input type="text" name="search" value="" id="to" />
            from:<input type="text" name="search" value="" id="from" />
          </fieldset>
        </form>
        <table border="1">
          <tr class="headers">
            <th class="bluedata" height="20px" valign="top">63rd St. &amp; Malvern Av. Loop<BR/></th>
            <th class="yellowdata" height="20px" valign="top">52nd St. &amp; Lansdowne Av.<BR/></th>
            <th class="bluedata" height="20px" valign="top">Lancaster &amp; Girard Avs<BR/></th>
            <th class="yellowdata" height="20px" valign="top">40th St. &amp; Lancaster Av.<BR/></th>
            <th class="bluedata" height="20px" valign="top">36th &amp; Market Sts<BR/></th>
            <th class="yellowdata" height="20px" valign="top">Juniper Station<BR/></th>
          </tr>
          <tr>
            <td class="bluedata" height="20px" title="63rd St. &amp; Malvern Av. Loop">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
            <td class="yellowdata" height="20px" title="52nd St. &amp; Lansdowne Av.">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
            <td class="bluedata" height="20px" title="Lancaster &amp; Girard Avs">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
            <td class="yellowdata" height="20px" title="40th St. &amp; Lancaster Av.">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
            <td class="bluedata" height="20px" title="36th &amp; Market Sts">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
            <td class="bluedata" height="20px" title="Juniper Station">
              <table width="100%"><tr><td>12:17am</td></tr><tr><td>12:17am</td></tr><tr><td>12:47am</td></tr></table>
            </td>
          </tr>
        </table>

    Now, depending upon what data is typed into the textboxes, I need the table trs/tds to show or hide. So if I type "63rd" in the "to" box and "juniper" in the "from" box, I need only those two trs/tds showing, in that order, and none of the others.

    Read the article

  • Squid proxy not serving modified html content

    - by Matthew
    I'm trying to use squid to modify the page content of web page requests. I followed the upside-down-ternet tutorial, which showed instructions for how to flip images on pages. I need to change the actual html of the page. I've been trying to do the same thing as in the tutorial, but instead of editing the image I'm trying to edit the html page. Below is a php script I'm using to try to do it. All jpg images get flipped, but the content on the page does not get edited. The edited index.html files written contain the edited content, but the pages the users receive don't contain the edited content.

        #!/usr/bin/php
        <?php
        $temp = array();
        while ( $input = fgets(STDIN) ) {
            $micro_time = microtime();
            // Split the output (space delimited) from squid into an array.
            $temp = split(' ', $input);
            // Flip jpg images, this works correctly
            if (preg_match("/.*\.jpg/i", $temp[0])) {
                system("/usr/bin/wget -q -O /var/www/cache/$micro_time.jpg ". $temp[0]);
                system("/usr/bin/mogrify -flip /var/www/cache/$micro_time.jpg");
                echo "http://127.0.0.1/cache/$micro_time.jpg\n";
            }
            // Don't edit files that are obviously not html. $temp[0] contains url of file to get
            elseif (preg_match("/(jpg|png|gif|css|js|\(|\))/i", $temp[0], $matches)) {
                echo $input;
            }
            // Otherwise, could be html (e.g. `wget http://www.google.com` downloads index.html)
            else {
                $time = time() . microtime();              // For unique directory names
                $time = preg_replace("/ /", "", $time);    // Simplify things by removing the spaces
                mkdir("/var/www/cache/". $time);           // Create unique folder
                system("/usr/bin/wget -q --directory-prefix=\"/var/www/cache/$time/\" ". $temp[0]);
                $filename = system("ls /var/www/cache/$time/");  // Get filename of downloaded file
                // File is html, edit the content (this does not work)
                if (preg_match("/.*\.html/", $filename)) {
                    // Get the html file contents
                    $contentfh = fopen("/var/www/cache/$time/". $filename, 'r');
                    $content = fread($contentfh, filesize("/var/www/cache/$time/". $filename));
                    fclose($contentfh);
                    // Edit the html file contents
                    $content = preg_replace("/<\/body>/i", "<!-- content served by proxy --></body>", $content);
                    // Write the edited file
                    $contentfh = fopen("/var/www/cache/$time/". $filename, 'w');
                    fwrite($contentfh, $content);
                    fclose($contentfh);
                    // Return the edited page
                    echo "http://127.0.0.1/cache/$time/$filename\n";
                }
                // Otherwise file is not html, don't edit
                else {
                    echo $input;
                }
            }
        }
        ?>

    Read the article

  • Function inserted not all records

    - by user1799459
    I wrote the following code for data transfer from Access to Firebird:

        def FirebirdDatetime(dt):
            return '\'%s.%s.%s\'' % (str(dt.day).rjust(2,'0'), str(dt.month).rjust(2,'0'), str(dt.year).rjust(4,'0'))

        def SelectFromAccessTable(tablename):
            return 'select * from [' + tablename + ']'

        def InsertToFirebirdTable(tablename, row):
            values = ''
            i = 0
            for elem in row:
                i += 1
                #print type(elem)
                if type(elem) == int:
                    temp = str(elem)
                elif (type(elem) == str) or (type(elem) == unicode):
                    temp = '\'%s\'' % (elem,)
                elif type(elem) == datetime.datetime:
                    temp = FirebirdDatetime(elem)
                elif type(elem) == decimal.Decimal:
                    temp = str(elem)
                elif elem == None:
                    temp = 'null'
                if (i < len(row)):
                    values += temp + ', '
                else:
                    values += temp
            return 'insert into ' + tablename + ' values (' + values + ')'

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            SelectSql = SelectFromAccessTable(accesstablename)
            for row in accesscursor.execute(SelectSql):
                InsertSql = InsertToFirebirdTable(firebirdtablename, row)
                print InsertSql
                firebirdcursor.execute(InsertSql)

    In the main module there is an AccessToFirebird function call:

        conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb')
        SqlAccess = conAcc.cursor();
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur)
        conn.commit()
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur)
        sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur)
        sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur)
        sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur)
        sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur)
        sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur)
        sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur)
        conn.commit()
        conAcc.commit()
        conn.close()
        conAcc.close()

    But as a result, not all records were inserted into the table Products (the Goods table of the Northwind database). For example, this request does not work:

        insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '')

    In IBExpert, the following message is issued for this request: can't format message 13:587 -- message file C:\Windows\firebird.msg not found. conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????". Only these requests worked:

        insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '')
        insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '')
        insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? ???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '')

    The other records were not inserted.

    Read the article

  • Vim + OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing. Completion on local objects which are STL containers works fine. For example, given the following files:

        // foo.h
        #include <string>

        class foo
        {
        public:
            void set_str(const std::string &);
            std::string get_str_reverse( void );

        private:
            std::string str;
        };

        // foo.cpp
        #include "foo.h"

        using std::string;

        string foo::get_str_reverse ( void )
        {
            string temp;
            temp.assign(str);
            reverse(temp.begin(), temp.end());
            return temp;
        }   /* ----- end of method foo::get_str ----- */

        void foo::set_str ( const string &s )
        {
            str.assign(s);
        }   /* ----- end of method foo::set_str ----- */

    I've generated the tags for these two files using:

        ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q .

    When I type "temp." in the cpp I get a list of string member functions as expected. But if I type "str." OmniCppComplete spits out "Pattern Not Found". I've noticed that the "temp." completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers?

    Edit: I found that completion on members which are STL containers works if I make the following modifications to the header:

        // foo.h
        #include <string>

        using std::string;

        class foo
        {
        public:
            void set_str(const string &);
            string get_str_reverse( void );

        private:
            string str;
        };

    Basically, if I add using std::string; and then remove the std:: namespace qualifier from the string str; member and regenerate the tags file, then OmniCppComplete is able to do completion on str.. It doesn't seem to matter whether or not I have let OmniCpp_DefaultNamespaces = ["std", "_GLIBCXX_STD"] set in the .vimrc. The problem is that putting using declarations in header files seems like a big no-no, so I'm back to square one.

    Read the article

  • C# Serialization Surrogate - Cannot access a disposed object

    - by crushhawk
    I have an image class (VisionImage) that is a black box to me. I'm attempting to serialize the image object to file using serialization surrogates, as explained on this page. Below is my surrogate code:

        sealed class VisionImageSerializationSurrogate : ISerializationSurrogate
        {
            public void GetObjectData(Object obj, SerializationInfo info, StreamingContext context)
            {
                VisionImage image = (VisionImage)obj;
                byte[,] temp = image.ImageToArray().U8;
                info.AddValue("width", image.Width);
                info.AddValue("height", image.Height);
                info.AddValue("pixelvalues", temp, temp.GetType());
            }

            public Object SetObjectData(Object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
            {
                VisionImage image = (VisionImage)obj;
                Int32 width = info.GetInt32("width");
                Int32 height = info.GetInt32("height");
                byte[,] temp = new byte[height, width];
                temp = (byte[,])info.GetValue("pixelvalues", temp.GetType());
                PixelValue2D tempPixels = new PixelValue2D(temp);
                image.ArrayToImage(tempPixels);
                return image;
            }
        }

    I stepped through the write path and it seems to be working fine (the file grows as images are captured). I then tried to read the file back in. The values read back are correct as far as the "info" object is concerned, but when I reach the line

        image.ArrayToImage(tempPixels);

    it throws a "Cannot access a disposed object" exception. Upon further inspection, the object and the resulting image are both marked as disposed. My code behind the form spawns an "acquisitionWorker" and runs the following code:

        void acquisitionWorker_LoadImages(object sender, DoWorkEventArgs e)
        {
            // This is the main function of the acquisition background worker thread.
            // Perform image processing here instead of the UI thread to avoid a
            // sluggish or unresponsive UI.
            BackgroundWorker worker = (BackgroundWorker)sender;
            try
            {
                uint bufferNumber = 0;

                // Loop until we tell the thread to cancel or we get an error. When this
                // function completes the acquisitionWorker_RunWorkerCompleted method will
                // be called.
                while (!worker.CancellationPending)
                {
                    VisionImage savedImage = (VisionImage)formatter.Deserialize(fs);
                    CommonAlgorithms.Copy(savedImage, imageViewer.Image);

                    // Update the UI by calling ReportProgress on the background worker.
                    // This will call the acquisition_ProgressChanged method in the UI
                    // thread, where it is safe to update UI elements. Do not update UI
                    // elements directly in this thread as doing so could result in a
                    // deadlock.
                    worker.ReportProgress(0, bufferNumber);
                    bufferNumber++;
                }
            }
            catch (ImaqException ex)
            {
                // If an error occurs and the background worker thread is not being
                // cancelled, then pass the exception along in the result so that
                // it can be handled in the acquisition_RunWorkerCompleted method.
                if (!worker.CancellationPending)
                    e.Result = ex;
            }
        }

    What am I missing here? Why would the object be immediately disposed?
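    One thing that may be worth trying (a sketch, not a verified fix; it assumes VisionImage exposes a usable parameterless constructor) is to ignore the obj instance handed to SetObjectData and return a freshly constructed image. The formatter creates the incoming instance uninitialized, so its internal native resources were never acquired, which can surface as a disposed object; the value returned from SetObjectData is what the formatter actually hands back to the caller:

        public Object SetObjectData(Object obj, SerializationInfo info,
            StreamingContext context, ISurrogateSelector selector)
        {
            // width/height are read for validation; the pixel array carries
            // its own dimensions.
            Int32 width = info.GetInt32("width");
            Int32 height = info.GetInt32("height");
            byte[,] temp = (byte[,])info.GetValue("pixelvalues", typeof(byte[,]));

            // Build a fresh image instead of touching the formatter-created 'obj'
            // (assumption: a public VisionImage() constructor exists).
            VisionImage image = new VisionImage();
            image.ArrayToImage(new PixelValue2D(temp));
            return image;   // the returned object replaces 'obj'
        }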

    Read the article

  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining. Our corpus of documents will be stored in a database table that is defined as

        create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text. We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

        create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

        DECLARE
            cursor c2 is
              select id, text
              from documents;
        BEGIN
            for r_c2 in c2 loop
               insert into extracted_tokens
                 select r_c2.id id, main_term token
                 from my_thesaurus_rules
                 where matches(query_string, r_c2.text) > 0;
            end loop;
        END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

        create table extracted_tokens_tfidf as
          with num_docs as (select count(distinct id) doc_cnt
                            from extracted_tokens),
               tf       as (select a.id, a.token,
                                   a.token_cnt/b.num_tokens token_freq
                            from
                              (select id, token, count(*) token_cnt
                               from extracted_tokens
                               group by id, token) a,
                              (select id, count(*) num_tokens
                               from extracted_tokens
                               group by id) b
                            where a.id=b.id),
               doc_freq as (select token, count(*) overall_token_cnt
                            from extracted_tokens
                            group by token)
          select tf.id, tf.token,
                 token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
          from num_docs,
               tf,
               doc_freq df
          where df.token=tf.token;

    From the WITH clause, the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by counting the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of times each token appears overall in the corpus. In the SELECT clause, we compute the tf_idf. Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

        CREATE TABLE extracted_tokens_tfidf_nt
                     NESTED TABLE extracted_tokens_tfidf_1
                         STORE AS extracted_tokens_tfidf_tab AS
                     select id,
                            cast(collect(DM_NESTED_NUMERICAL(token, tf_idf))
                                 as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
                     from extracted_tokens_tfidf
                     group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20); the cosine distance, which is better for text; turning off automatic data preparation, since the values are ready for mining; the number of iterations (20), to get a better model; and the split criterion of size, for clusters that are roughly balanced in the number of cases assigned.

        CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

        BEGIN
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.clus_num_clusters, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_iterations, 20);
          INSERT INTO km_settings (setting_name, setting_value)
            VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
          COMMIT;
        END;

    With this in place, we can now build the clustering model.

        BEGIN
            DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'TEXT_CLUSTERING_MODEL',
            mining_function     => dbms_data_mining.clustering,
            data_table_name     => 'extracted_tokens_tfidf_nt',
            case_id_column_name => 'id',
            settings_table_name => 'km_settings');
        END;

    To generate cluster names from this model, check out my earlier post on that topic.
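    As a quick check of the result (a short sketch, not part of the original post), the finished model can be applied back to the case table with the SQL scoring functions to see each document's cluster assignment:

        select id,
               cluster_id(TEXT_CLUSTERING_MODEL using *)          cluster_assignment,
               cluster_probability(TEXT_CLUSTERING_MODEL using *) probability
        from   extracted_tokens_tfidf_nt;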

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure whether any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches!

    1. Querying

    When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!

        public static class ServiceExtensions
        {
            /// <summary>
            /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>
            /// and calling the base DomainService.Initialize(DomainServiceContext) method.
            /// </summary>
            /// <typeparam name="TService">The type of the service.</typeparam>
            /// <param name="service">The service.</param>
            /// <returns></returns>
            public static TService Initialize<TService>(this TService service)
                where TService : DomainService
            {
                var context = CreateDomainServiceContext();
                service.Initialize(context);
                return service;
            }

            private static DomainServiceContext CreateDomainServiceContext()
            {
                var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));
                return new DomainServiceContext(provider, DomainOperationType.Query);
            }

            private static HttpContext GetHttpContext()
            {
                var context = HttpContext.Current;

        #if DEBUG
                // create a mock HttpContext to use during unit testing...
                if ( context == null )
                {
                    var writer = new StringWriter();
                    var request = new SimpleWorkerRequest("/", "/",
                        String.Empty, String.Empty, writer);

                    context = new HttpContext(request)
                    {
                        User = new GenericPrincipal(new GenericIdentity("debug"), null)
                    };
                }
        #endif

                return context;
            }
        }

    With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():

        public ActionResult Index()
        {
            var service = new NorthwindService().Initialize();
            var customers = service.GetCustomers();

            return View(customers);
        }

    2. Insert / Update / Delete

    Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.

        [EnableClientAccess()]
        public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>
        {
            public IQueryable<Customer> GetCustomers()
            {
                return this.DataContext.Customers;
            }

            public void InsertCustomer(Customer customer)
            {
                this.DataContext.Customers.InsertOnSubmit(customer);
            }

            public void UpdateCustomer(Customer currentCustomer)
            {
                this.DataContext.Customers.TryAttach(currentCustomer,
                    this.ChangeSet.GetOriginal(currentCustomer));
            }

            public void DeleteCustomer(Customer customer)
            {
                this.DataContext.Customers.TryAttach(customer);
                this.DataContext.Customers.DeleteOnSubmit(customer);
            }
        }

    Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that check whether the supplied entity is already attached before attempting to attach it, which would cause an error!  (A sketch of what these methods might look like appears at the end of this post.)

    3. LinqToSqlRepository<T>

    Below is the code for the LinqToSqlRepository class.  The comments are pretty self-explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensure that they will be ignored by the code generator and not available in the Silverlight client application.

        /// <summary>
        /// Provides generic repository methods on top of the standard
        /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.
        /// </summary>
        /// <typeparam name="TContext">The type of the context.</typeparam>
        public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>
            where TContext : System.Data.Linq.DataContext, new()
        {
            /// <summary>
            /// Retrieves an instance of an entity using its unique identifier.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="keyValues">The key values.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class
            {
                var table = this.DataContext.GetTable<TEntity>();
                var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));

                var keys = mapping.RowType.IdentityMembers
                    .Select((m, i) => m.Name + " = @" + i)
                    .ToArray();

                return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();
            }

            /// <summary>
            /// Creates a new query that can be executed to retrieve a collection
            /// of entities from the <see cref="DataContext"/>.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class
            {
                return this.DataContext.GetTable<TEntity>();
            }

            /// <summary>
            /// Inserts the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class
            {
                //var table = this.DataContext.GetTable<TEntity>();
                //table.InsertOnSubmit(entity);

                return this.Submit(entity, null, DomainOperation.Insert);
            }

            /// <summary>
            /// Updates the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Update<TEntity>(TEntity entity) where TEntity : class
            {
                return this.Update(entity, null);
            }

            /// <summary>
            /// Updates the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <param name="original">The original.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Update<TEntity>(TEntity entity, TEntity original)
                where TEntity : class
            {
                if ( original == null )
                {
                    original = GetOriginal(entity);
                }

                var table = this.DataContext.GetTable<TEntity>();
                table.TryAttach(entity, original);

                return this.Submit(entity, original, DomainOperation.Update);
            }

            /// <summary>
            /// Deletes the specified entity.
            /// </summary>
            /// <typeparam name="TEntity">The type of the entity.</typeparam>
            /// <param name="entity">The entity.</param>
            /// <returns></returns>
            [IgnoreOperation]
            public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class
            {
                //var table = this.DataContext.GetTable<TEntity>();
                //table.TryAttach(entity);
                //table.DeleteOnSubmit(entity);

                return this.Submit(entity, null, DomainOperation.Delete);
            }

            protected virtual bool Submit(Object entity, Object original, DomainOperation operation)
            {
                var entry = new ChangeSetEntry(0, entity, original, operation);
                var changes = new ChangeSet(new ChangeSetEntry[] { entry });
                return base.Submit(changes);
            }

            private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class
            {
                var context = CreateDataContext();
                var table = context.GetTable<TEntity>();
                return table.FirstOrDefault(e => e == entity);
            }
        }

    4. Conclusion

    So there you have it, a fully functional repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
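    The TryAttach extension methods referenced above are not listed in the post; here is a minimal sketch of what they might look like (the names and exact semantics are inferred from the calls above, so treat this as an assumption rather than the author's code):

        using System.Data.Linq;

        public static class TableExtensions
        {
            // Attach only when the entity is not already tracked by the context;
            // attaching an already-attached entity throws an exception.
            public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
                where TEntity : class
            {
                if (table.GetOriginalEntityState(entity) == null)
                {
                    table.Attach(entity);
                }
            }

            // Attach with original values so LINQ to SQL can compute the update.
            public static void TryAttach<TEntity>(this Table<TEntity> table,
                TEntity entity, TEntity original) where TEntity : class
            {
                if (table.GetOriginalEntityState(entity) == null)
                {
                    if (original != null)
                    {
                        table.Attach(entity, original);
                    }
                    else
                    {
                        // Attach as modified; subject to the usual LINQ to SQL
                        // requirement of a version/timestamp member.
                        table.Attach(entity, true);
                    }
                }
            }
        }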

    Read the article

  • Briefly explained: Advanced Security Option (ASO)

    - by Anne Manke
    WHO? Customers who run Oracle Database Enterprise Edition and whose security or business departments require data and/or network encryption, and/or who store personal data in Oracle databases, and/or who want to switch database access from username/password to smart cards or Kerberos.

    WHAT? Activating the Advanced Security Option makes the following requirements easy to meet:

    - Store individual table columns encrypted, for example where the Payment Card Industry Data Security Standard (PCI DSS) or the European Data Protection Directive calls for encrypting certain data
    - Secure data storage: encryption of all application data
    - No noticeable change in performance
    - Backups are automatically encrypted, so data theft from backups is prevented
    - Encryption of network transmission: sniffer tools cannot capture readable data
    - Up-to-date encryption algorithms are used (AES256, 3DES168, among others)

    HOW? The Oracle Advanced Security Option is an important building block of an end-to-end security architecture. It considerably reduces the risk of data misuse and also implements protection against non-database users such as "root" on Unix, so "root" can no longer read the database files without authorization. ASO covers the complete physical stack: from the communication between the client and the database, through encrypted storage of the data in the file system, to keeping the data in a backup system.

    The BVA (Bundesverwaltungsamt, the German Federal Office of Administration) offers its customers more security through Oracle security technologies in its new personnel management system EPOS 2.0.

    And beyond that?

    Encryption of network traffic: How does network encryption affect performance? Our customers consistently confirm that, especially in modern multi-tier architectures, users hardly notice any performance loss. If more precise performance figures are needed, realistic, customer-specific tests are indispensable.

    Encryption of application data (Transparent Data Encryption, TDE): Do I have to rewrite my applications so that they can use TDE? NO. TDE is completely transparent to your applications.

    Can't I just encrypt the data in my application instead? Yes, but the application data would then be stored only in LOBs or text fields, and that has serious drawbacks: there are, for example, no date or number fields, so no meaningful reporting works on this data. Applications also cannot work with data that was encrypted by a different application. The most important argument against encrypting inside an application, however, is performance: since no indexes can be created on data encrypted by an application, the database performs a full table scan on every access, reading every row of the affected table. Resource consumption can therefore rise enormously, which in turn may mean higher license costs. Data encrypted with ASO, by contrast, can be read and evaluated by Oracle Database Firewall.

    Why should I use TDE rather than full disk encryption? TDE offers more far-reaching protection, because it also guards against system administrators who have no database access but can read the database files at the operating-system level. In addition, data that has been encrypted once stays encrypted wherever it is copied, which is not the case with disk encryption.

    Which encryption algorithms are available? AES (256-, 192-, 128-bit keys) and 3DES (3-key).
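    To illustrate the transparency claim (a short sketch, not from the article; it assumes an open wallet/keystore and the ASO license), TDE is declared once in DDL at the column or tablespace level, and applications keep issuing ordinary SQL against the same tables:

        -- Encrypt a single sensitive column (column-level TDE):
        CREATE TABLE customers (
          id  NUMBER,
          ssn VARCHAR2(11) ENCRYPT USING 'AES256'
        );

        -- Or encrypt everything placed in a tablespace (tablespace-level TDE):
        CREATE TABLESPACE secure_ts
          DATAFILE 'secure01.dbf' SIZE 100M
          ENCRYPTION USING 'AES256'
          DEFAULT STORAGE (ENCRYPT);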

    Read the article
