Search Results

Search found 46513 results on 1861 pages for 'database size'.


  • java/swing: font size selection + rendering

    - by Jason S
    I want to create and fill/stroke a path that consists of an outer boundary, which is a square of side d, and an inner boundary that is the outline of any of the capital letters. How can I do this? (The challenges are creating a mask from a font, and figuring out the right size/position to use.)
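
    One plausible approach with plain Java2D, sketched below under assumptions (the font choice, the side d, and the centering heuristic are all mine): TextLayout.getOutline() turns a glyph into a java.awt.Shape, and java.awt.geom.Area set operations cut that glyph out of the square, leaving a single path that can be filled or stroked.

        import java.awt.*;
        import java.awt.font.FontRenderContext;
        import java.awt.font.TextLayout;
        import java.awt.geom.AffineTransform;
        import java.awt.geom.Area;
        import java.awt.geom.Rectangle2D;
        import javax.swing.*;

        // Sketch: a square of side d with a capital letter subtracted from it.
        public class LetterMask extends JPanel {
            private final double d = 200;      // side of the outer square (assumed)
            private final String letter = "A"; // any capital letter

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2 = (Graphics2D) g;
                g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                                    RenderingHints.VALUE_ANTIALIAS_ON);

                // Size the font relative to d; getBounds() reports the glyph's extent.
                Font font = new Font(Font.SERIF, Font.BOLD, (int) (d * 0.8));
                FontRenderContext frc = g2.getFontRenderContext();
                TextLayout layout = new TextLayout(letter, font, frc);
                Rectangle2D gb = layout.getBounds();

                // Translate the glyph outline so it sits centered in the square.
                double tx = (d - gb.getWidth()) / 2 - gb.getX();
                double ty = (d - gb.getHeight()) / 2 - gb.getY();
                Shape glyph = layout.getOutline(
                        AffineTransform.getTranslateInstance(tx, ty));

                // Outer boundary minus the letter outline = the mask; fill, then stroke.
                Area path = new Area(new Rectangle2D.Double(0, 0, d, d));
                path.subtract(new Area(glyph));
                g2.setColor(Color.DARK_GRAY);
                g2.fill(path);
                g2.setColor(Color.BLACK);
                g2.draw(path);
            }

            public static void main(String[] args) {
                JFrame f = new JFrame("Letter mask");
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.add(new LetterMask());
                f.setSize(260, 290);
                f.setVisible(true);
            }
        }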


  • Changing Form Size in VS 2008

    - by Dcurvez
    Good morning all :) Can anyone tell me why I can't get my Windows Form size to go to 1280x768 in VS 2008? The resolution I am developing on is 1024x768, but the computer I am going to run this program on is a widescreen at 1280x768. I try to change the size in the Properties window, but it keeps defaulting back to 1036x760.


  • Calculate needed size for a TLabel

    - by Tom
    Ok, here's the problem. I have a label component in a panel. The label is aligned as alClient and has WordWrap enabled. The text can vary from one line to several lines. I would like to resize the height of the panel (and the label) to fit all the text. How do I get the necessary height of a label when I know the text and the width of the panel?


  • android problem in getting the screen resolution size

    - by Sivaganesan.r
        DisplayMetrics displaymetrics = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(displaymetrics);
        height = displaymetrics.heightPixels;
        width = displaymetrics.widthPixels;
        Log.e("FirstImage", "Width = "+width+"Height = "+height);

    The above is the code I have used to read the screen size, but the problem is that I am getting width=320 and height=569, while the Motorola Milestone I am using has a screen size of 480x854. Please suggest what I am missing. Thanks in advance.
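
    For what it's worth, 320x569 is almost exactly 480x854 scaled by two thirds, which is the classic symptom of density compatibility mode (for example a very low minSdkVersion/targetSdkVersion or a missing <supports-screens> declaration), where the system reports scaled rather than physical pixels. A small diagnostic sketch, with the suspected manifest fix noted in the comments as an assumption:

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.DisplayMetrics;
        import android.util.Log;

        // Logs the reported pixels plus the density factors, so you can see
        // whether the platform is downscaling the app's view of the screen.
        public class MetricsCheckActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                DisplayMetrics dm = new DisplayMetrics();
                getWindowManager().getDefaultDisplay().getMetrics(dm);
                Log.d("MetricsCheck", "pixels=" + dm.widthPixels + "x" + dm.heightPixels
                        + " density=" + dm.density + " densityDpi=" + dm.densityDpi);
                // On a 480x854 hdpi Milestone, a reading of 320x569 suggests
                // compatibility scaling; declaring android:minSdkVersion="4" (or
                // higher) and <supports-screens android:anyDensity="true"/> in
                // AndroidManifest.xml is the usual remedy (assumption).
            }
        }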


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?); deploying the database changes with a managed, automated approach; and monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate, but also because I think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage, and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub.

    The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set-up from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue.

    Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand side below.

    We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But before we proceed: we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table – this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a primary key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close".

    We should at this point re-commit the changes (both the addition of the primary key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from).

    An interesting point to note here, when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."): the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article.

    In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test, which needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
            Begin
                Set @Output = Char(13) + Char(10) + @Output
                EXEC tSQLt.Fail @Output
            End

        END;

    Once this script is entered, hit Execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success!

    But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now re-run the same SQL Test as before and you'll see the following: great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the sproc for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process – i.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up.

    The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19 Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: It worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes, how can we now deploy these to target sites?


  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. UNION ALL queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION ALL query share common processing, i.e., refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and joins. The join factorization transformation offers an opportunity to share these common computations across the UNION ALL branches. Currently, join factorization factorizes common references to base tables only, i.e., not views.

    Consider a simple example, query Q1.

        Q1: select t1.c1, t2.c2
            from t1, t2, t3
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
            union all
            select t1.c1, t2.c2
            from t1, t2, t4
            where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3;

    Table t1 appears in both branches, as do the filter predicate on t1 (t1.c1 > 1) and the join predicate involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization, which can transform Q1 into Q2 as follows:

        Q2: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2, t3
                      where t2.c2 = t3.c2 and t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2, t4
                      where t2.c3 = t4.c3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1;

    In Q2, t1 is "factorized", and thus the table scan and the filtering on t1 are done only once (they're shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement.

    Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3.

        Q3: select *
            from t5, (select t1.c1, t2.c2
                      from t1, t2, t3
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2
                      union all
                      select t1.c1, t2.c2
                      from t1, t2, t4
                      where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V
            where t5.c1 = V.c1;

    In Q3, view V is the same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 out of view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1; t2 must be joined with t3 before it can be joined with t1, which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under the cost-based transformation framework: we cost the plans with and without join factorization and choose the cheapest plan.

    Note that if the branches in a UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.
        Q4: select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1
            union all
            select distinct t1.*
            from t1, t2
            where t1.c1 = t2.c1

        Q5: select distinct t1.*
            from t1, (select t2.c1 item_1
                      from t2
                      union all
                      select t2.c1 item_1
                      from t2) VW_JF_1
            where t1.c1 = VW_JF_1.item_1

    Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate-free because of the DISTINCT keyword at the top level, while Q4's results might contain duplicates.

    The examples given so far involve inner joins only. Join factorization is also supported for outer joins, anti joins and semi joins, but only the right tables of outer joins, anti joins and semi joins can be factorized. It is not semantically correct to factorize the left table of an outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7.

        Q6: select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 2
            union all
            select t1.c1, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t2.c2(+) = 3

        Q7: select t1.c1, VW_JF_1.item_2
            from t1, (select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 2
                      union all
                      select t2.c1 item_1, t2.c2 item_2
                      from t2
                      where t2.c2 = 3) VW_JF_1
            where t1.c1 = VW_JF_1.item_1(+)

    However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join.

        Q8: select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 1
            union all
            select t1.c2, t2.c2
            from t1, t2
            where t1.c1 = t2.c1(+) and t1.c1 = 2

        Q9: select VW_JF_1.item_2, t2.c2
            from t2, (select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 1
                      union all
                      select t1.c1 item_1, t1.c2 item_2
                      from t1
                      where t1.c1 = 2) VW_JF_1
            where VW_JF_1.item_1 = t2.c1(+)

    All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration: join factorization can factorize multiple tables, and from more than two UNION ALL branches.

    Summary

    Join factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query, which can lead to huge performance improvements.


  • 9/18 Live Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g - Still time to register

    - by jgelhaus
    If you or your organization is still working with Oracle Database 10g or an even older version, now is the time to upgrade. Oracle Database 11g offers a wide variety of advantages to enhance your operation. Join us at 10 am PT / 1 pm ET on September 18th for this live Webcast and learn about what you're missing: the business, operational, and technical benefits. With Oracle Database 11g, you can: upgrade with zero downtime; improve application performance and database security; reduce the amount of storage required; and save time and money. Register today!


  • how to troubleshoot using rsyslog to output to a mysql database

    - by ChrisNZ
    Using FreeBSD 8.0, 32 bit. I have installed rsyslogd 5.5.5 with ommysql (installed ports /usr/ports/sysutils/rsyslog55 and /usr/ports/sysutils/rsyslog55-mysql). My rsyslog.conf file looks like:

        $ModLoad imudp
        $ModLoad imtcp
        $ModLoad ommysql
        $ModLoad immark.so
        $ModLoad imuxsock.so
        $ModLoad imklog.so
        $OptimizeForUniprocessor on
        $AllowedSender UDP, 10.0.0.0/8
        $UDPServerAddress 0.0.0.0
        $UDPServerRun 514
        $UDPServerTimeRequery 2
        # +SG560
        *.* :ommysql:127.0.0.1,Syslog,sysloguser,mypassword

    My command line flags for rsyslogd are: -c5 -4. Checking the config with -c5 -N1 returns no errors. I have confirmed that rsyslogd is working by changing the last line to say:

        *.* /var/log/snapgear.log

    which results in messages appearing in the snapgear.log file. So it is probably something to do with my MySQL setup. If I do:

        mysql -u sysloguser -p Syslog
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 56
        Server version: 5.0.86 FreeBSD port: mysql-server-5.0.86

        mysql> select * from SystemEvents;
        Empty set (0.00 sec)

    :-( I have confirmed that sysloguser has full privileges for the Syslog database. If I run rsyslogd on the console in debug mode:

        /usr/local/sbin/rsyslogd -f /usr/local/etc/rsyslog.conf -c5 -n -d

    I can see this sequence of events each time a message is received:

        9244.376687256:28359280: main Q: entry added, size now log 1, phys 1 entries
        9244.376705694:28359280: main Q: EnqueueMsg advised worker start
        9244.376726647:28359280: Listening on UDP syslogd socket 4 (IPv4/port 514).
        9244.376728602:28359280: --------imUDP calling select, active file descriptors (max 4): 4
        9244.376890075:283593c0: wti 0x28306e80: worker awoke from idle processing
        9244.376892031:283593c0: we deleted 0 objects and enqueued 0 objects
        9244.376893986:283593c0: delete batch from store, new sizes: log 1, phys 1
        9244.376895942:283593c0: msgConsumer processes msg 0/1
        9244.376897898:283593c0: msg parser: flags 70, from '~NOTRESOLVED~', msg 'Jun 29 17:32:24 SG560 kernel: (20000629T1732244'
        9244.376900132:283593c0: parse using parser list 0x283080e8 (the default list).
        9244.376902088:283593c0: dropped LF at very end of message (DropTrailingLF is set)
        9244.376904044:283593c0: Parser 'rsyslog.rfc5424' returned -2160
        9244.376905999:283593c0: Message will now be parsed by the legacy syslog parser (one size fits all... ;)).
        9244.376907955:283593c0: Parser 'rsyslog.rfc3164' returned 0
        9244.376909910:283593c0: testing filter, f_pmask 255
        9244.376911866:283593c0: Called action, logging to ommysql
        9244.376918012:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244]
        9244.376919967:283593c0: action call returned -2123
        9244.376921923:283593c0: tryDoAction: unexpected error code -2123, finalizing
        9244.376926113:283593c0: actionTryResume: action state: susp, next retry (if applicable): 1277869250 [now 1277869244]
        9244.376928069:283593c0: ruleset: get iRet 0 from rule.ProcessMsg()
        9244.376930024:283593c0: ruleset.ProcessMsg() returns 0
        9244.376931980:283593c0: regular consumer finished, iret=0, szlog 0 sz phys 1
        9244.376933936:283593c0: XXX: enqueueing data element 0 of 1
        9244.376935891:283593c0: we deleted 1 objects and enqueued 0 objects
        9244.376938126:283593c0: delete batch from store, new sizes: log 0, phys 0
        9244.376940082:283593c0: regular consumer finished, iret=4, szlog 0 sz phys 0
        9244.376942037:283593c0: main Q:Reg/w0: worker IDLE, waiting for work.
    I can see that the action call to ommysql returns unexpected error code -2123. Now I am stuck! Any ideas on what to look for next? Perhaps there are extra ports I need to install? I will be very grateful for any assistance here!


  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and a JMS broker in an application server (Glassfish). The scenario is dead simple: (1) an EJB inserts a row in the database and sends a message; (2) when the message is delivered to an MDB, the row is read and updated. The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable if we consider the two-phase commit protocol: (1) prepare JMS; (2) prepare database; (3) commit JMS; (4) a tiny little gap where the message can be delivered before the insert has been committed; (5) commit database.

    I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box." My questions are then: How could it work out of the box? My scenario sounds fairly simple; why aren't there more people with similar troubles? Am I doing something wrong? Is there a way to solve this issue correctly?

    Here are a few more details about my understanding of the problem. This timing issue exists only if the participants are treated in this order. If the 2PC treated the participants in the reverse order (database first, then message broker) that should be fine. The problem was happening randomly but was completely reproducible. I found no way to control the order of the participants in the distributed transaction in the JTA, JCA and JPA specifications, nor in the Glassfish documentation. We could assume they are enlisted in the distributed transaction in the order in which they are used, but with an ORM such as JPA, it's difficult to know when the data are flushed and when the database connection is really used. Any idea?
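
    For reference, a minimal sketch of the sender side as described (the bean name, table name and JNDI resource names are all hypothetical). With an XA datasource and an XA connection factory, both operations enlist in one container-managed JTA transaction, and the gap described above opens between the two commit phases:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.ejb.TransactionAttribute;
        import javax.ejb.TransactionAttributeType;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class AuditService {
            @PersistenceContext
            private EntityManager em;
            @Resource(mappedName = "jms/MyConnectionFactory")
            private ConnectionFactory connectionFactory;
            @Resource(mappedName = "jms/MyQueue")
            private Queue queue;

            // One container-managed JTA transaction spans both operations.
            @TransactionAttribute(TransactionAttributeType.REQUIRED)
            public void insertAndNotify(String payload) throws JMSException {
                // Participant 1: the database (EVENT_LOG is an assumed table).
                em.createNativeQuery("INSERT INTO EVENT_LOG (MSG) VALUES (?)")
                  .setParameter(1, payload)
                  .executeUpdate();
                // Participant 2: the JMS broker.
                Connection c = connectionFactory.createConnection();
                try {
                    // Inside a JTA transaction the transacted/ack arguments are ignored.
                    Session s = c.createSession(true, Session.AUTO_ACKNOWLEDGE);
                    s.createProducer(queue).send(s.createTextMessage(payload));
                } finally {
                    c.close();
                }
                // On commit, 2PC prepares both resources and then commits each in
                // turn; if the broker's commit lands first, the MDB can receive the
                // message before the inserted row is visible.
            }
        }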


  • Android: NullPointerException error in getting data in database

    - by Gil Viernes Marcelo
    This what happens in the system. 1. Admin login this is in other activity but i will not post it coz it has nothing to do with this (no problem) 2. Register user in system (using database no problem) 3. Click add user button (where existing user who register must display its name in ListView) Problem: When I click adduser to see if the system registered the user, it force close. CurrentUser.java package com.example.istronggyminstructor; import java.util.ArrayList; import android.os.Bundle; import android.app.Activity; import android.content.Context; import android.content.Intent; import android.database.Cursor; import android.view.Gravity; import android.view.LayoutInflater; import android.view.Menu; import android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.view.ViewGroup.LayoutParams; import android.view.WindowManager; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.FrameLayout; import android.widget.ListView; import android.widget.PopupWindow; import android.widget.TextView; import android.widget.Toast; import java.util.ArrayList; import java.util.List; import java.util.Random; import com.example.istronggyminstructor.registeredUserList.Users; import android.content.ContentValues; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; public class CurrentUsers extends Activity { private Button register; private Button adduser; EditText getusertext, getpass, getweight, textdisp; View popupview, popupview2; public static ArrayList<String> ArrayofName = new ArrayList<String>(); protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_current_users); register = (Button) findViewById(R.id.regbut); adduser = (Button) findViewById(R.id.addbut); register.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { LayoutInflater inflator = (LayoutInflater) getBaseContext() .getSystemService(LAYOUT_INFLATER_SERVICE); popupview = inflator.inflate(R.layout.popup, null); final PopupWindow popupWindow = new PopupWindow(popupview, LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); popupWindow.showAtLocation(popupview, Gravity.CENTER, 0, 0); popupWindow.setFocusable(true); popupWindow.update(); Button dismissbtn = (Button) popupview.findViewById(R.id.close); dismissbtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { popupWindow.dismiss(); } }); popupWindow.showAsDropDown(register, 50, -30); } }); //Thread.setDefaultUncaughtExceptionHandler(new forceclose(this)); } public void registerUser(View v) { EditText username = (EditText) popupview.findViewById(R.id.usertext); EditText password = (EditText) popupview .findViewById(R.id.passwordtext); EditText weight = (EditText) popupview.findViewById(R.id.weight); String getUsername = username.getText().toString(); String getPassword = password.getText().toString(); String getWeight = weight.getText().toString(); dataHandler dbHandler = new dataHandler(this, null, null, 1); Users user = new Users(getUsername, getPassword, Integer.parseInt(getWeight)); dbHandler.addUsers(user); Toast.makeText(getApplicationContext(), "Registering...", Toast.LENGTH_SHORT).show(); } public void onClick_addUser(View v) { LayoutInflater inflator = (LayoutInflater) getBaseContext() .getSystemService(LAYOUT_INFLATER_SERVICE); popupview2 = 
inflator.inflate(R.layout.popup2, null); final PopupWindow popupWindow = new PopupWindow(popupview2, LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); popupWindow.showAtLocation(popupview2, Gravity.CENTER, 0, -10); popupWindow.setFocusable(true); popupWindow.update(); Button dismissbtn = (Button) popupview2.findViewById(R.id.close2); dismissbtn.setOnClickListener(new OnClickListener() { @Override public void onClick(View arg0) { popupWindow.dismiss(); } }); popupWindow.showAsDropDown(register, 50, -30); dataHandler dbHandler = new dataHandler(this, null, null, 1); dbHandler.getAllUsers(); ListView list = (ListView)findViewById(R.layout.popup2); ArrayAdapter<String> adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, ArrayofName); list.setAdapter(adapter); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.current_users, menu); return true; } } registeredUserList.java package com.example.istronggyminstructor; public class registeredUserList { public static class Users { private static int _id; private static String _users; private static String _password; private static int _weight; private static String[] _workoutlists; private static int _score; public Users() { } public Users(String username, String password, int weight) { _users = username; _password = password; _weight = weight; } public int getId() { return _id; } public static void setId(int id) { _id = id; } public String getUsers() { return _users; } public static void setUsers(String users) { _users = users; } public String getPassword(){ return _password; } public void setPassword(String password){ _password = password; } public int getWeight(){ return _weight; } public static void setWeight(int weight){ _weight = weight; } public String[] getWorkoutLists(){ return _workoutlists; } public void setWorkoutLists(String[] workoutlists){ _workoutlists = workoutlists; } public int score(){ return _score; } public void score(int score){ _score = score; } } } dataHandler.java package com.example.istronggyminstructor; import java.util.ArrayList; import java.util.List; import com.example.istronggyminstructor.registeredUserList.Users; import android.content.ContentValues; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteDatabase.CursorFactory; import android.database.sqlite.SQLiteOpenHelper; public class dataHandler extends SQLiteOpenHelper { private static final int DATABASE_VERSION = 1; private static final String DATABASE_NAME = "userInfo.db"; public static final String TABLE_USERINFO = "user"; public static final String COLUMN_ID = "_id"; public static final String COLUMN_USERNAME = "username"; public static final String COLUMN_PASSWORD = "password"; public static final String COLUMN_WEIGHT = "weight"; public dataHandler(Context context, String name, CursorFactory factory, int version) { super(context, DATABASE_NAME, factory, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { String CREATE_USER_TABLE = "CREATE TABLE " + TABLE_USERINFO + " (" + COLUMN_ID + " INTEGER PRIMARY KEY, " + COLUMN_USERNAME + " TEXT," + COLUMN_PASSWORD + " TEXT, " + COLUMN_WEIGHT + " INTEGER " + ");"; db.execSQL(CREATE_USER_TABLE); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS " + TABLE_USERINFO); onCreate(db); } public void addUsers(Users user) { ContentValues values = new ContentValues(); 
values.put(COLUMN_USERNAME, user.getUsers()); values.put(COLUMN_PASSWORD, user.getPassword()); values.put(COLUMN_WEIGHT, user.getWeight()); SQLiteDatabase db = this.getWritableDatabase(); db.insert(TABLE_USERINFO, null, values); db.close(); } public Users findUsers(String username) { String query = "Select * FROM " + TABLE_USERINFO + " WHERE " + COLUMN_USERNAME + " = \"" + username + "\""; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Users user = new Users(); if (cursor.moveToFirst()) { cursor.moveToFirst(); Users.setUsers(cursor.getString(1)); //Users.setWeight(Integer.parseInt(cursor.getString(3))); not yet needed cursor.close(); } else { user = null; } db.close(); return user; } public List<Users> getAllUsers(){ List<Users> user = new ArrayList(); String selectQuery = "SELECT * FROM " + TABLE_USERINFO; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(selectQuery, null); if (cursor.moveToFirst()) { do { Users users = new Users(); users.setUsers(cursor.getString(1)); String name = cursor.getString(1); CurrentUsers.ArrayofName.add(name); // Adding contact to list user.add(users); } while (cursor.moveToNext()); } // return user list return user; } public boolean deleteUsers(String username) { boolean result = false; String query = "Select * FROM " + TABLE_USERINFO + " WHERE " + COLUMN_USERNAME + " = \"" + username + "\""; SQLiteDatabase db = this.getWritableDatabase(); Cursor cursor = db.rawQuery(query, null); Users user = new Users(); if (cursor.moveToFirst()) { Users.setId(Integer.parseInt(cursor.getString(0))); db.delete(TABLE_USERINFO, COLUMN_ID + " = ?", new String[] { String.valueOf(user.getId()) }); cursor.close(); result = true; } db.close(); return result; } } Logcat 08-20 03:23:23.293: E/AndroidRuntime(16363): FATAL EXCEPTION: main 08-20 03:23:23.293: E/AndroidRuntime(16363): java.lang.IllegalStateException: Could not execute method of the activity 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View$1.onClick(View.java:3599) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View.performClick(View.java:4204) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.view.View$PerformClick.run(View.java:17355) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Handler.handleCallback(Handler.java:725) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Handler.dispatchMessage(Handler.java:92) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.os.Looper.loop(Looper.java:137) 08-20 03:23:23.293: E/AndroidRuntime(16363): at android.app.ActivityThread.main(ActivityThread.java:5041) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invokeNative(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invoke(Method.java:511) 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793) 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560) 08-20 03:23:23.293: E/AndroidRuntime(16363): at dalvik.system.NativeStart.main(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): Caused by: java.lang.reflect.InvocationTargetException 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invokeNative(Native Method) 08-20 03:23:23.293: E/AndroidRuntime(16363): at java.lang.reflect.Method.invoke(Method.java:511) 08-20 03:23:23.293: E/AndroidRuntime(16363): at 
android.view.View$1.onClick(View.java:3594) 08-20 03:23:23.293: E/AndroidRuntime(16363): ... 11 more 08-20 03:23:23.293: E/AndroidRuntime(16363): Caused by: java.lang.NullPointerException 08-20 03:23:23.293: E/AndroidRuntime(16363): at com.example.istronggyminstructor.CurrentUsers.onClick_addUser(CurrentUsers.java:118) 08-20 03:23:23.293: E/AndroidRuntime(16363): ... 14 more
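
    Reading the trace, CurrentUsers.java:118 is almost certainly the ListView lookup in onClick_addUser: it passes a layout id (R.layout.popup2) where a view id belongs, and it searches the activity's view hierarchy instead of the inflated popup, so findViewById returns null and the subsequent setAdapter call throws. A hedged sketch of the fix (R.id.userList is an assumed id – use whatever id the ListView actually declares in popup2.xml):

        // Look the ListView up inside the inflated popup view, by its view id.
        ListView list = (ListView) popupview2.findViewById(R.id.userList);
        ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                this, android.R.layout.simple_list_item_1, ArrayofName);
        list.setAdapter(adapter);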


  • Form values appear blank when submitting to the database - Drupal FormAPI

    - by GaxZE
    Hello, I have been working on this Drupal Form API script for the past week and a half. To give an insight into my problem: the form below merely lists a host of database records which contain 5 individual scoring ranks (mind, action, relationship, language and IT). This code is a part of my own custom module, where all values are listed from the database. The idea behind this module is to be able to edit these values on a large scale. I am having trouble getting the values entered in the form passed to the variables inside the marli_admin_submit function. The second problem is assigning those values to their specific ID. For this purpose, I'd like to add that I'm merely trying to get just one score updated rather than all of them. Below is my code. Any advice appreciated.

        function marli_scores() {
          $result = pager_query(db_rewrite_sql('SELECT * FROM marli WHERE value != " "'));
          while ($node = db_fetch_object($result)) {
            $attribute = $node->attribute;
            $field = $node->field_name;
            $item = $node->value;
            $mind = $node->mind;
            $action = $node->action;
            $relationship = $node->relationship;
            $language = $node->language;
            $it = $node->it;
            $form['field'][$node->marli_id] = array('#type' => 'markup', '#value' => $field, '#prefix' => '<b>', '#suffix' => '</b>');
            $form['title'][$node->marli_id] = array('#type' => 'markup', '#value' => $item, '#prefix' => '<b>', '#suffix' => '</b>');
            $form['mind'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $mind);
            $form['action'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $action);
            $form['relationship'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $relationship);
            $form['language'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $language);
            $form['it'][$node->marli_id] = array('#type' => 'textfield', '#maxlength' => '1', '#size' => '1', '#value' => $it);
          }
          $form['pager'] = array('#value' => theme('pager', NULL, 50, 0));
          $form['save'] = array('#type' => 'submit', '#value' => t('Save'));
          $form['#theme'] = 'marli_scores';
          return $form;
        }

        function marli_admin_submit($form, &$form_state) {
          $marli_id = 4;
          $submit_mind = $form_state['values']['mind'][$marli_id];
          $submit_action = $form_state['values']['action'][$marli_id];
          $submit_relationship = $form_state['values']['relationship'][$marli_id];
          $submit_language = $form_state['values']['language'][$marli_id];
          $submit_it = $form_state['values']['it'][$marli_id];
          $sql_query = "UPDATE {marli} SET mind = %d, action = %d, relationship = %d, language = %d, it = %d WHERE marli_id = %d";
          if ($success = db_query($sql_query, $submit_mind, $submit_action, $submit_relationship, $submit_language, $submit_it)) {
            drupal_set_message(t(' Values have been saved.'));
          }
          else {
            drupal_set_message(t('There was an error saving your data. Please try again.'));
          }
        }


  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), I get this message when I run queries against my database:

        SELECT * FROM [Testing].[dbo].[AuditingReport]

        Msg 18456, Level 14, State 1, Line 1
        Login failed for user 'auditor'.

    I'm logged into the server from SQL Server Management Studio as 'auditor', and I don't see anything in the error log about the login failure. I've already run:

        Use Testing;
        Grant All to auditor;
        Go

    And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to each individual user.


  • Can't view database on SQL Server 2008 with domain user

    - by abatishchev
    I created a login for a domain user (a domain admin) and added it to the serveradmin role, but after logging in I still can't list databases; I get the following error:

        The database MyDB is not accessible. (ObjectExplorer)

        Program Location:
        at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.DatabaseNavigableItem.get_CanGetChildren()
        at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.NavigableItem.GetChildren(IGetChildrenRequest request)
        at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ExplorerHierarchyNode.BuildChildren(WaitHandle quitEvent)

    How can I fix that?


  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
    Storyline

    Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure. I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

    A long time ago in a galaxy far, far away...

    Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere – upgrading would break scripts!):

        sudo mkdir -p /home/postgres/main
        sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
        sudo chown -R postgres.postgres /home/postgres
        sudo chmod -R 700 /home/postgres
        sudo usermod -d /home/postgres/ postgres

    All good to here. Next, restart the server and configure the database using these installation instructions:

        sudo apt-get install postgresql pgadmin3
        sudo /etc/init.d/postgresql-8.4 stop
        sudo vi /etc/postgresql/8.4/main/postgresql.conf

    Change data_directory to /home/postgres/main, then:

        sudo /etc/init.d/postgresql-8.4 start
        sudo -u postgres psql postgres
        \password postgres
        sudo -u postgres createdb climate
        pgadmin3

    Use pgadmin3 to configure the database and create a schema.

    A New Hope

    The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

        perl Makefile.PL
        sudo make install
        sudo apt-get install perl-doc
        perldoc SQL::Translator::Manual

    (Strangely, the package is not called perldoc.) Extract a PostgreSQL-friendly DDL and all the MySQL data:

        sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
        mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

    The Database Strikes Back

    Recreate the structure in PostgreSQL as follows: switch to pgadmin3, click the Execute arbitrary SQL queries icon, and open climate-pg-ddl.sql. Search for TABLE " and replace with TABLE climate." (inserting the schema name climate); search for on " and replace with on climate." (inserting the schema name climate). Press F5 to execute. This results in: Query returned successfully with no result in 122 ms.

    Replies of the Jedi

    At this point I am stumped. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL? How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment, to ease the transition)? How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

    Resources

    A fair bit of information was needed to get this far:

        https://help.ubuntu.com/community/PostgreSQL
        http://articles.sitepoint.com/article/site-mysql-postgresql-1
        http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
        http://pgfoundry.org/frs/shownotes.php?release_id=810
        http://sqlfairy.sourceforge.net/

    Thank you!


  • RT database scaling

    - by rplevy
    Recently I heard someone suggest that RT (Request Tracker) may have scalability issues due to its non-normalized database (someone at a Perl meeting I went to referred to it in a positive light as hyper-normalized, but I think he may have misunderstood what normalization is all about). On the other hand, I know that large-scale enterprises such as Perl's CPAN use RT. Does this level of scale require special measures to handle what happens when the db grows too large? What have your experiences been?


  • EM12c Release 4: New Compliance features including DB STIG Standard

    - by DaveWolf
    Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 Enterprise Manager’s compliance framework is a powerful and robust feature that provides users the ability to continuously validate their target configurations against a specified standard. Enterprise Manager’s compliance library is filled with a wide variety of standards based on Oracle’s recommendations, best practices and security guidelines. These standards can be easily associated to a target to generate a report showing its degree of conformance to that standard. ( To get an overview of  Database compliance management in Enterprise Manager see this screenwatch. ) Starting with release 12.1.0.4 of Enterprise Manager the compliance library will contain a new standard based on the US Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Oracle Database 11g. According to the DISA website, “The STIGs contain technical guidance to ‘lock down’ information systems/software that might otherwise be vulnerable to a malicious computer attack.” In essence, a STIG is a technical checklist an administrator can follow to secure a system or software. Many US government entities are required to follow these standards however many non-US government entities and commercial companies base their standards directly or partially on these STIGs. You can find more information about the Oracle Database and other STIGs on the DISA website. The Oracle Database 11g STIG consists of two categories of checks, installation and instance. Installation checks focus primarily on the security of the Oracle Home while the instance checks focus on the configuration of the running database instance itself. If you view the STIG compliance standard in Enterprise Manager, you will see the rules organized into folders corresponding to these categories. Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 -"/ /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} The rule names contain a rule ID ( DG0020 for example ) which directly map to the check name in the STIG checklist along with a helpful brief description. The actual description field contains the text from the STIG documentation to aid in understanding the purpose of the check. All of the rules have also been documented in the Oracle Database Compliance Standards reference documentation. 
In order to use this standard both the OMS and agent must be at version 12.1.0.4 as it takes advantage of several features new in this release including: Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Agent-Side Compliance Rules Manual Compliance Rules Violation Suppression Additional BI Publisher Compliance Reports /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Agent-Side Compliance Rules Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Agent-side compliance rules are essentially the result of a tighter integration between Configuration Extensions and Compliance Rules. If you ever created customer compliance content in past versions of Enterprise Manager, you likely used Configuration Extensions to collect additional information into the EM repository so it could be used in a Repository compliance rule. This process although powerful, could be confusing to correctly model the SQL in the rule creation wizard. With agent-side rules, the user only needs to choose the Configuration Extension/Alias combination and that’s it. Enterprise Manager will do the rest for you. This tighter integration also means their lifecycle is managed together. When you associate an agent-side compliance standard to a target, the required Configuration Extensions will be deployed automatically for you. The opposite is also true, when you unassociated the compliance standard, the Configuration Extensions will also be undeployed. 
The Oracle Database STIG compliance standard is implemented as an agent-side standard which is why you simply need to associate the standard to your database targets without previously deploying the associated Configuration Extensions. You can learn more about using Agent-Side compliance rules in the screenwatch Using Agent-Side Compliance Rules on Enterprise Manager's Lifecycle Management page on OTN. /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Manual Compliance Rules Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} There are many checks in the Oracle Database STIG as well as other common standards which simply cannot be automated. This could be something as simple as “Ensure the datacenter entrance is secured.” or complex as Oracle Database STIG Rule DG0186 – “The database should not be directly accessible from public or unauthorized networks”. These checks require a human to perform and attest to its successful completion. Enterprise Manager now supports these types of checks in Manual rules. When first associated to a target, each manual rule will generate a single violation. These violations must be manually cleared by a user who is in essence attesting to its successful completion. The user is able to permanently clear the violation or give a future date on which the violation will be regenerated. Setting a future date is useful when policy dictates a periodic re-validation of conformance wherein the user will have to reperform the check. The optional reason field gives the user an opportunity to provide details of the check results. 
Violation Suppression

There are situations that require permanently or temporarily suppressing a legitimate violation or finding; these include approved exceptions and grace periods. Enterprise Manager now supports the ability to temporarily or permanently suppress a violation. Unlike clearing a manual rule violation, suppression simply removes the violation from the compliance results UI, and in turn its negative impact on the score. The violation still remains in the EM repository and can be accounted for in compliance reports. Temporarily suppressing a violation can give users a grace period in which to address an issue; if the issue is not addressed within the specified period, the violation automatically reappears in the results. Again, the user may enter a reason for the suppression, which is permanently saved with the event along with the suppressing user's ID.

Additional BI Publisher Compliance Reports

As I am sure you have learned by now, BI Publisher now ships with, and is integrated into, Enterprise Manager 12.1.0.4. This means users can take full advantage of its powerful reporting engine, either by using the Oracle-provided reports or by building their own. There are many new compliance-related reports available in 12.1.0.4, covering all aspects including association status, the compliance library, and both summary and detailed results.
[Screenshot: 10 new compliance reports]
[Screenshot: Compliance Summary Report example showing STIG results]

Conclusion

Together with the Oracle Database 11g STIG compliance standard, these features provide a complete solution for easily auditing and reporting the security posture of your Oracle Databases against this well-known benchmark. You can view an overview presentation and demo in the screenwatch Using the STIG Compliance Standard on Enterprise Manager's Lifecycle Management page on OTN.

Additional EM12c Compliance Management information:

- Compliance Management - Overview (Presentation)
- Compliance Management - Custom Compliance on Default Data (How To)
- Compliance Management - Custom Compliance using SQL Configuration Extension (How To)
- Compliance Management - Custom Compliance using Command Configuration Extension (How To)

    Read the article

  • Response.TransmitFile problem

    - by geoff
I have the following code delivering a file to users when they click on a download link. For security purposes I can't just link directly to the file, so this was set up to decode the URL and transmit the file. It had been working fine for a while, but recently I started having problems: the file starts downloading, but there's no indication of how large it is, so the download doesn't stop when it should. The file is about 99 MB, but the browser just keeps downloading well beyond 100 MB. I don't know what it's downloading, but if I don't cancel it, it doesn't stop. So my question is: is there either an alternative to TransmitFile, or a way to make sure the size of the file is also sent so that the download stops at the right point? I don't want to use WriteFile because I don't want to load the entire file into memory, since it's so large. Thanks. Here is the code:

    string filename = Path.GetFileName(url);
    context.Response.Buffer = true;
    context.Response.Charset = "";
    context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
    context.Response.ContentType = "application/x-rar-compressed";
    context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
    context.Response.TransmitFile(context.Server.MapPath(url));
    context.Response.Flush();
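One plausible fix, offered here as a sketch rather than a confirmed answer from the original thread, is to send an explicit Content-Length header before calling TransmitFile, so the browser knows exactly where the download ends. Buffering is also turned off so the large file streams rather than being held in memory; url and context are assumed to be the same variables as in the question:

    // Sketch: declare the exact byte count up front so the client
    // can terminate the download at the right point.
    string path = context.Server.MapPath(url);
    long length = new System.IO.FileInfo(path).Length;

    context.Response.Buffer = false; // stream the ~99 MB file instead of buffering it
    context.Response.Charset = "";
    context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
    context.Response.ContentType = "application/x-rar-compressed";
    context.Response.AddHeader("content-disposition",
        "attachment;filename=" + System.IO.Path.GetFileName(path));
    context.Response.AddHeader("Content-Length", length.ToString());
    context.Response.TransmitFile(path);
    context.Response.Flush();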

    Read the article

  • Stand alone or free application to backup ADAM / AD LDS database files

    - by Darqer
Do you know of any small, standalone, free tool that can be run from a console to back up and restore ADAM / AD LDS database files (such as adamntds.dit, edbres00001.jrs, etc.)? I tried stopping the ADAM service and copying these files to another location, but afterwards I was unable to restore ADAM from them. I know that on Windows Server 2003 I could use a backup tool provided by Microsoft, but it seems to be unavailable on Windows Server 2008.
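For what it's worth, Windows Server 2008 does include a built-in console tool, dsdbutil.exe, that can take a restorable installation-media (IFM) snapshot of an AD LDS instance. A minimal sketch follows; "MyAdamInstance" and the backup path are placeholders you would replace with your own values:

    rem Create an IFM snapshot of an AD LDS instance (run elevated).
    rem "MyAdamInstance" is the service name of your AD LDS instance.
    dsdbutil "activate instance MyAdamInstance" "ifm" "create full C:\adlds-backup" quit quit

The snapshot directory contains a consistent copy of adamntds.dit that can be used to rebuild the instance; verify the exact restore procedure against Microsoft's AD LDS documentation before relying on this.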

    Read the article
