Search Results

Search found 25747 results on 1030 pages for 'tree view'.


  • Aventail VPN connect on Mac OS X 10.6.2 Snow Leopard

    - by Warlax
    Hello, I am running Mac OS X Snow Leopard (10.6.2). Recently, my company switched (for whatever odd reason) from Cisco VPN, which used to work fine, to Aventail VPN. I installed the Aventail VPN client on both the Mac and a Windows 7 machine, both on my home network. When I connect from one or the other (making sure one is disconnected before connecting through the other), I get a connection, view the correct certificate, accept it, and Aventail tells me that I am connected. However, accessing any page inside my company's network only works on Windows. On the Mac I get the following page: http://grab.by/2WOA It looks like my ISP doesn't know how to redirect me? Maybe my DNS is set incorrectly on the Mac? Our helpdesk has been completely useless and I was hoping fellow super users could help. Thanks.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime; and for the FinalSalary attribute, I select Decimal.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 23:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax tells expressor Studio to derive the century from the two-digit year: any year later than 01 is treated as falling in the 20th century (19xx) and any year of 01 or before as falling in the 21st century (20xx). As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the pound or euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
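    For readers who want to see concretely what those schema directives are doing, here is a minimal stand-alone C# sketch of the equivalent transformations: the Y01 century pivot, the optional time portion, and the currency-to-decimal conversion. This is not expressor Studio code; the class and method names (and the pivot default) are illustrative assumptions.

    using System;
    using System.Globalization;

    static class CsvTransforms
    {
        // Y01-style pivot: two-digit years greater than the pivot become 19xx,
        // years at or below it become 20xx.
        public static int ApplyCenturyPivot(int twoDigitYear, int pivot = 1)
            => twoDigitYear > pivot ? 1900 + twoDigitYear : 2000 + twoDigitYear;

        // Handles "13-JAN-93" and "24-JUL-98 17:00" (time portion optional).
        public static DateTime ParseEmployeeDate(string raw)
        {
            var formats = new[] { "dd-MMM-yy HH:mm", "dd-MMM-yy" };
            var parsed = DateTime.ParseExact(raw, formats,
                CultureInfo.InvariantCulture, DateTimeStyles.None);
            // Re-apply the explicit pivot rather than relying on .NET's default.
            int year = ApplyCenturyPivot(parsed.Year % 100);
            return new DateTime(year, parsed.Month, parsed.Day,
                parsed.Hour, parsed.Minute, 0);
        }

        // "$85,000" -> 85000m: strips the currency symbol and grouping commas.
        public static decimal ParseSalary(string raw)
            => decimal.Parse(raw, NumberStyles.Currency,
                CultureInfo.GetCultureInfo("en-US"));
    }

    The point of the article, of course, is that expressor Studio captures exactly these rules declaratively in the schema artifact, so none of this code has to be written.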

    Read the article

  • How does one get Vista to display .MOV thumbnails as a frame from the movie?

    - by Paul Hollingsworth
    I've discovered that the Windows Vista Explorer shell does not display proper thumbnails for .MOV files. For example, for .JPG files the Explorer shell, when in Thumbnail mode, displays a scaled-down version of the image. But for .MOV files, it displays the same icon as in the non-thumbnail view, which is just a generic icon showing that the file will be played with QuickTime or whatever. What XP used to do was show a frame from the actual movie as the thumbnail. I've looked on the internet and found various proposed solutions, but none that actually work. Is there a simple step-by-step process for getting Vista to properly display thumbnails for .MOV files?

    Read the article

  • What kind of user stories should be written in the initial stages of a project?

    - by Domenic
    When just starting a project, you have nothing: no UI, no data layer, nothing in between. Thus, a single story like "users should be able to view their foos" will entail a lot of work. Once you have that story, one like "users should be able to edit their foos" is more realistic, but that first story will involve setting up a UI layer, a presentation logic layer, a domain logic layer, and a data access layer. This doesn't fit with my concept of "tasks"; to me, I'd rather have something like the following tasks:

    1. Show dummy data for a user's foos in HTML, derived from JavaScript objects.
    2. Set up a presentation logic layer, and connect the JavaScript objects to it.
    3. Set up a domain logic layer, and connect the presentation logic layer to it.
    4. Set up a data access layer, and connect the domain logic layer to it.

    Do all of these fall under the single "story" above? If so, I feel like stories are not a terribly useful framework in the early stages of a project. If that's the case, that's fine; I just want to make sure I'm not missing something, since I'm really trying to learn this agile methodology as best I can.

    Read the article

  • 3 monitors windows xp

    - by c0mrade
    I have a laptop mounted to a dock. When I use 2 monitors with DualView everything works fine and I get my image across both monitors, but when I connect a third one I can't use it, although it is recognized. Here's what I'd like to do: get dual view from my laptop across two monitors, and have a remote machine that connects to it via Remote Desktop displayed on the third monitor. Is that possible, or some other combination?

    Read the article

  • My New BDD Style

    - by Liam McLennan
    I have made a change to my code-based BDD style. I start with a scenario such as:

    Pre-Editing
    * Given I am a book editor
    * And some chapters are locked and some are not
    * When I view the list of chapters for editing
    * Then I should see some chapters are editable and are not locked
    * And I should see some chapters are not editable and are locked

    and I implement it using a modified SpecUnit base class as:

    [Concern("Chapter Editing")]
    public class when_pre_editing_a_chapter : BaseSpec
    {
        private User i;
        // other context variables

        protected override void Given()
        {
            i_am_a_book_editor();
            some_chapters_are_locked_and_some_are_not();
        }

        protected override void Do()
        {
            i_view_the_list_of_chapters_for_editing();
        }

        private void i_am_a_book_editor()
        {
            i = new UserBuilder().WithUsername("me").WithRole(UserRole.BookEditor).Build();
        }

        private void some_chapters_are_locked_and_some_are_not() { }

        private void i_view_the_list_of_chapters_for_editing() { }

        [Observation]
        public void should_see_some_chapters_are_editable_and_are_not_locked() { }

        [Observation]
        public void should_see_some_chapters_are_not_editable_and_are_locked() { }
    }

    and the output from the SpecUnit report tool is:

    Chapter Editing specifications    1 context, 2 specifications

    Chapter Editing, when pre editing a chapter    2 specifications
        should see some chapters are editable and are not locked
        should see some chapters are not editable and are locked

    The intent is to provide a clear mapping from story –> scenarios –> bdd tests.
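    The modified base class itself isn't shown in the post. As a rough idea of what it might look like, here is a minimal sketch of a context/specification base class inferred from the Given/Do overrides above; the NUnit wiring and member names are my assumptions, not SpecUnit's actual source:

    // A template-method base: arrange (Given) then act (Do) once per context,
    // before the [Observation] methods make their assertions.
    public abstract class BaseSpec
    {
        [NUnit.Framework.OneTimeSetUp]  // assumes an NUnit-backed runner
        public void EstablishContext()
        {
            Given();  // set up the scenario's context
            Do();     // perform the action under observation
        }

        protected virtual void Given() { }
        protected virtual void Do() { }
    }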

    Read the article

  • Error in code of basic game using multiple sprites and surfaceView [on hold]

    - by Khagendra Nath Mahato
    I am a beginner to Android and I was trying to make a basic game with the help of an online video tutorial. I am having a problem with multiple sprites and how to use them with SurfaceView. The application fails on launch. Here is the code of the game; please help me.

    package com.example.killthemall;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    import android.app.Activity;
    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.graphics.Rect;
    import android.os.Bundle;
    import android.view.SurfaceHolder;
    import android.view.SurfaceView;
    import android.widget.Toast;

    public class Game extends Activity {
        KhogenView View1;
        Thread OurThread;
        int herorows = 4;
        int herocolumns = 3;
        int xpos, ypos;
        int xspeed, yspeed;
        int herowidth, heroheight;
        int widthnumber = 0;
        Rect src, dst;
        int round;
        Bitmap bmp1;
        public List<Sprite> sprites = new ArrayList<Sprite>();

        @Override
        protected void onPause() {
            super.onPause();
            while (true) {
                try {
                    OurThread.join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            View1 = new KhogenView(this); // Game.java:58 in the LogCat below is this call
            setContentView(View1);
            sprites.add(createSprite(R.drawable.image));
            sprites.add(createSprite(R.drawable.bad1));
            sprites.add(createSprite(R.drawable.bad2));
            sprites.add(createSprite(R.drawable.bad3));
            sprites.add(createSprite(R.drawable.bad4));
            sprites.add(createSprite(R.drawable.bad5));
            sprites.add(createSprite(R.drawable.bad6));
            sprites.add(createSprite(R.drawable.good1));
            sprites.add(createSprite(R.drawable.good2));
            sprites.add(createSprite(R.drawable.good3));
            sprites.add(createSprite(R.drawable.good4));
            sprites.add(createSprite(R.drawable.good5));
            sprites.add(createSprite(R.drawable.good6));
        }

        private Sprite createSprite(int image) {
            bmp1 = BitmapFactory.decodeResource(getResources(), image);
            return new Sprite(this, bmp1);
        }

        public class KhogenView extends SurfaceView implements Runnable {
            SurfaceHolder OurHolder;
            Canvas canvas = null;
            Random rnd = new Random();

            // NOTE: this instance initializer runs during the constructor, before
            // any canvas has been locked: 'canvas' is still null here, so
            // canvas.getWidth() throws the NullPointerException reported at
            // Game.java:96 in the LogCat below.
            {
                xpos = rnd.nextInt(canvas.getWidth() - herowidth) + herowidth;
                ypos = rnd.nextInt(canvas.getHeight() - heroheight) + heroheight;
                xspeed = rnd.nextInt(10 - 5) + 5;
                yspeed = rnd.nextInt(10 - 5) + 5;
            }

            public KhogenView(Context context) {
                super(context);
                OurHolder = getHolder();
                OurThread = new Thread(this);
                OurThread.start();
            }

            @Override
            public void run() {
                herowidth = bmp1.getWidth() / 3;
                heroheight = bmp1.getHeight() / 4;
                boolean isRunning = true;
                while (isRunning) {
                    if (!OurHolder.getSurface().isValid())
                        continue;
                    canvas = OurHolder.lockCanvas();
                    canvas.drawRGB(02, 02, 50);
                    for (Sprite sprite : sprites) {
                        if (widthnumber == 3)
                            widthnumber = 0;
                        update();
                        getdirection();
                        src = new Rect(widthnumber * herowidth, round * heroheight,
                                (widthnumber + 1) * herowidth, (round + 1) * heroheight);
                        dst = new Rect(xpos, ypos, xpos + herowidth, ypos + heroheight);
                        canvas.drawBitmap(bmp1, src, dst, null);
                    }
                    widthnumber++;
                    OurHolder.unlockCanvasAndPost(canvas);
                }
            }

            public void update() {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                if (xpos + xspeed <= 0) xspeed = 40;
                if (xpos >= canvas.getWidth() - herowidth) xspeed = -50;
                if (ypos + yspeed <= 0) yspeed = 45;
                if (ypos >= canvas.getHeight() - heroheight) yspeed = -55;
                xpos = xpos + xspeed;
                ypos = ypos + yspeed;
            }

            public void getdirection() {
                double angleinteger = (Math.atan2(yspeed, xspeed)) / (Math.PI / 2);
                round = (int) (Math.round(angleinteger) + 2) % herorows;
                // Toast.makeText(this, String.valueOf(round), Toast.LENGTH_LONG).show();
            }
        }

        public class Sprite {
            Game game;
            private Bitmap bmp;

            public Sprite(Game game, Bitmap bmp) {
                this.game = game;
                this.bmp = bmp;
            }
        }
    }

    Here is the LogCat if it helps....

    08-22 23:18:06.980: D/AndroidRuntime(28151): Shutting down VM
    08-22 23:18:06.980: W/dalvikvm(28151): threadid=1: thread exiting with uncaught exception (group=0xb3f6f4f0)
    08-22 23:18:06.980: D/AndroidRuntime(28151): procName from cmdline: com.example.killthemall
    08-22 23:18:06.980: E/AndroidRuntime(28151): in writeCrashedAppName, pkgName :com.example.killthemall
    08-22 23:18:06.980: D/AndroidRuntime(28151): file written successfully with content: com.example.killthemall StringBuffer : ;com.example.killthemall
    08-22 23:18:06.990: I/Process(28151): Sending signal. PID: 28151 SIG: 9
    08-22 23:18:06.990: E/AndroidRuntime(28151): FATAL EXCEPTION: main
    08-22 23:18:06.990: E/AndroidRuntime(28151): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.killthemall/com.example.killthemall.Game}: java.lang.NullPointerException
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1647)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1663)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread.access$1500(ActivityThread.java:117)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:931)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.os.Handler.dispatchMessage(Handler.java:99)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.os.Looper.loop(Looper.java:130)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread.main(ActivityThread.java:3683)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at java.lang.reflect.Method.invokeNative(Native Method)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at java.lang.reflect.Method.invoke(Method.java:507)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:880)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:638)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at dalvik.system.NativeStart.main(Native Method)
    08-22 23:18:06.990: E/AndroidRuntime(28151): Caused by: java.lang.NullPointerException
    08-22 23:18:06.990: E/AndroidRuntime(28151): at com.example.killthemall.Game$KhogenView.<init>(Game.java:96)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at com.example.killthemall.Game.onCreate(Game.java:58)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1049)
    08-22 23:18:06.990: E/AndroidRuntime(28151): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1611)
    08-22 23:18:06.990: E/AndroidRuntime(28151): ... 11 more

    (A second launch attempt at 23:18:18, PID 28191, logged an identical fatal exception and stack trace.)

    Read the article

  • Google Webmasters Tools strange 404 errors referred from same site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are referred from my own site to non-existent pages. The 404 error URLs are all of the same format:

    URL: http://www.helloneighbour.com/save/1347208508000

    The number on the end appears to be a timestamp followed by 3 zeros. The referring page, in this case, is:

    Linked from http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby

    When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find anything that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed and waited a couple of weeks, only to find the errors come back again over the last few days. I'm wondering if anyone else has seen anything strange like this, or if someone has a way for me to debug or replicate this error myself.
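    One detail worth poking at: if the trailing number is read as a JavaScript-style millisecond timestamp (what Date.getTime() returns), it decodes to a concrete moment. A quick C# sketch of that check; the interpretation is an assumption on my part, not something the logs confirm:

    using System;

    class DecodeSuffix
    {
        static void Main()
        {
            long ms = 1347208508000;  // the suffix from the 404 URL above
            DateTime utc = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                .AddMilliseconds(ms);
            Console.WriteLine(utc);   // 2012-09-09 16:35:08 UTC
        }
    }

    If the decoded date lines up with when the errors started appearing, that would suggest some client-side script on the referring page (an ad, analytics, or social widget, perhaps) is appending Date.getTime() to the /save/ links, which Googlebot then crawls.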

    Read the article

  • Printing a dynamic sheet as one document

    - by Sux2Lose
    I have a spreadsheet structured as follows:

    - Summary section at the top
    - Detail section on the bottom

    The summary section summarizes the detail section, which is filtered using auto filters. There are ten products that all need to be printed individually, but I want the page footer to show the overall page position across all of the print jobs and the total number of pages. That is probably not clear, so for example: if I print the two-page Product A view, it will print page 1 of 2 and 2 of 2. If I print the one-page Product B, it will show page 1 of 1. What I want is to print both and have Product A show Page 1 of 3 and Page 2 of 3, and Product B be Page 3 of 3. Is there any way to accomplish this?

    Read the article

  • Debug Apache mod_status showing 151 requests/sec - 2.7 GB/second

    - by James Hackett
    One of my web servers went crazy this morning and showed "151 requests/sec - 2.7 GB/second - 140.7 MB/request"; the normal is more like "11.1 requests/sec - 65.6 kB/second - 5.9 kB/request". I don't even think that kind of throughput is possible on my server. It was also listing odd symbols for the URLs, and the amount of data transferred for connections was off the meter:

    246-0 -1286402072 0/0/0 ? 0.00 -1444841118 0 -5416403825852416.0 0.00 0.00 °Rk³
    247-0 18 0/0/0 ? -13112985.76 2094967848 0 -5428200825946112.0 0.00 0.00
    248-2 23437 0/0/2 _ 0.00 0 0 0.0 0.00 -5340330065920.00 74.53.23.134 web2.mydomain.com OPTIONS / HTTP/1.0
    249-2 23279 0/2981898840/0 W 16673317.60 11 0 0.0 2844.06 0.00 201.144.221.245 www.mydomain.com GET /cb8ff49a2395a7b1accbbce1e4cf164f/view/256 HTTP/1.1
    250-0 0 40600/3009863336/0 ? 3816369.92 910209710 0 2913775.3 -5323551899648.00 -5324315849947.28 èøÏ²

    Has anyone seen anything like this before and do you know what might be causing it? I posted the full mod_status output here: http://pastie.org/916066

    Read the article

  • How to make MAMP PRO / XAMPP secure enough to serve as production webserver? Is it possible?

    - by Andrei
    Hi, my task is to set up a MAMP web server for our website in the easiest way possible, so it can be managed by my colleagues, who have no experience in server administration. MAMP PRO is an excellent solution, but some people advise against using it to serve external requests. Could you explain why it is bad (in detail if possible) and how to make it secure enough to be a full-scale (not only local) web server? Is there a better solution?

    Update: There is a discussion on the MAMP website. The XAMPP developers say that one can make their product secure:

    "The default configuration is not good from a security point of view and it's not secure enough for a production environment - please don't use XAMPP in such environment. Since LAMPP 0.9.5 you can make your XAMPP installation secure by calling »/opt/lampp/lampp security«."

    Could you comment on this?

    Read the article

  • Microsoft ReportViewer SetParameters continuous refresh issue

    - by Ilya Verbitskiy
    Originally posted on: http://geekswithblogs.net/ilich/archive/2013/10/16/microsoft-reportviewer-setparameters-continuous-refresh-issue.aspx

    I am a big fan of using ASP.NET MVC for building web applications. It allows us to create simple, robust and testable solutions. However, the .NET world is not perfect. There are tons of code written in ASP.NET Web Forms. You cannot simply ignore it, even if you want to. Sometimes ASP.NET Web Forms controls bring us non-obvious issues. A good example is the Microsoft ReportViewer control. I have an example for you.

    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
    <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

    <!DOCTYPE html>

    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title>Report Viewer Continuous Refresh Issue Example</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <asp:ScriptManager runat="server"></asp:ScriptManager>
            <rsweb:ReportViewer ID="_reportViewer" runat="server" Width="100%" Height="100%"></rsweb:ReportViewer>
        </div>
        </form>
    </body>
    </html>

    The back-end code is simple as well. I want to show a report with some parameters to a user.

    protected void Page_Load(object sender, EventArgs e)
    {
        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    I set ShowParameterPrompts to false because I do not want the user to refine the search. It looks good until you run the report: the report will refresh itself all the time. The problem is caused by the ServerReport.SetParameters method in Page_Load. The method causes the ReportViewer control to execute the report on the NEXT postback, which is why the page posts back continuously. The fix is very simple: do nothing if Page_Load executes during a postback.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            return;
        }

        _reportViewer.ProcessingMode = ProcessingMode.Remote;
        _reportViewer.ShowParameterPrompts = false;

        var serverReport = _reportViewer.ServerReport;
        serverReport.ReportServerUrl = new Uri("http://localhost/ReportServer_SQLEXPRESS");
        serverReport.ReportPath = "/Reports/TestReport";

        var reportParameter1 = new ReportParameter("Parameter1");
        reportParameter1.Values.Add("Hello World!");

        var reportParameter2 = new ReportParameter("Parameter2");
        reportParameter2.Values.Add("10/16/2013");

        var reportParameter3 = new ReportParameter("Parameter3");
        reportParameter3.Values.Add("10");

        serverReport.SetParameters(new[] { reportParameter1, reportParameter2, reportParameter3 });
    }

    You can download the sample code from GitHub - https://github.com/ilich/Examples/tree/master/ReportViewerContinuousRefresh

    Read the article

  • Silverlight Firestarter 2010 Keynote with Scott Guthrie: Silverlight has a bright future!

    - by Jim Duffy
    If you didn't get a chance to watch the Silverlight Firestarter event live during the webcast, it is available online to view now. If you're a Silverlight developer, or perhaps a shop actively planning on developing a Silverlight application, then you're going to want to watch this video. The Silverlight 5 feature set unveiled during the keynote is fantastic! I particularly like Scott's approach and comments on the future of Silverlight. I appreciated his open and direct acknowledgment that there has "been a lot of angst on this topic in the last few weeks"; he took the bull by the horns and stated, "Let me say up front that there is a Silverlight future, and we think it's going to be a very bright one." That comment drew applause from the audience and at our local viewing event held in Raleigh, NC. Of course my first question was when we can get our grubby little hands on Silverlight 5 and start working with it. The answer unfortunately wasn't "right now," but they did announce that the Silverlight 5 beta will be available in the first half of 2011. Of course the following is pure speculation on my part, but I wouldn't be surprised if they made it available at a certain event in April 2011. Additional information about the Silverlight 5 announcement is available on Scott's blog. Have a day.

    Read the article

  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    I have recently been looking at the latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I had put together some notes to help me gain a better understanding of the context of PS6 BPM Case Management. Hopefully, this along with the other resources will enable you to gain a clear picture of the flexibility of this feature.

    The Oracle BPM PS6 release includes Case Management capability. This initial release aims to provide:

    - a Case Management framework
    - integration of Case Management with the BPM & SOA suite

    It is best to regard the current PS6 Case Management feature as a case management framework. The framework provides the building blocks for creating a case management system that is fully integrated into the Oracle BPM suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog which outlines Case Management within PS6 in the following link. I wanted to provide more context on Case Management from my perspective in this blog.

    PS6 Case Management - High level View

    BPM PS6 includes "Case" as a first-class component in a SOA Suite composite. The Case components (added to the SOA composite) are created when a BPM process is assigned to a case in JDeveloper. The SOA Case component is defined and configured within JDeveloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores, etc. "Activities" are associated with a case and become available to be executed via the case APIs. Activities are BPM processes, human activities or Java call-outs.

    The PS6 release includes some additional database tables to store the case metadata and case instance data (data objects, comments, etc.). These new tables are created within the SOA_INFRA schema, and the documents associated with a case go into a document repository that is configured with the case.

    One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rules component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of that case. Events are fired during the lifetime of the case (e.g. case created, activity started, activity ended, note added, document uploaded).

    Internal Case state

    The internal state of a case is represented by the diagram below, which shows the internal states and the transition paths for a case from one state to the next. Each transition in state will create an event that can be enacted upon via the Case rules engine.

    Defining a case

    A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper creates the following components within the SCA composite:

    - Case component
    - Case component interfaces (WSDL etc.)
    - Case Rules component (Oracle Business Rules)

    and adds the Case component and Case Rules component to the BPM SOA composite.

    Case Configuration

    The following section gives a high-level overview of the items that can be configured for a BPM Case.
    Case Activities

    A Case is associated with a set of activities that are to be performed as part of that case. Case activities can be:

    - SOA human tasks
    - BPM processes
    - custom tasks (Java classes)

    Case activities are created from pre-existing BPM processes or human tasks, which, once defined, can be configured additionally as case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly as:

    Milestones - optional, user-defined logical milestones that can be achieved within a case. No activities are associated with a milestone, but milestone attainment can be programmatically set and events raised when milestones are reached.

    Outcomes - user-defined statuses of a completed case. An event is fired when an outcome is attained.

    Case Data - defines the data that will be stored with a case; XML schemas define that data.

    Case Documents - defines the location of documents that are attached to a case (e.g. WebCenter Content).

    User Defined Events - optional user-defined events that can be fired or captured to drive case processing rules.

    Stakeholders - defines the actors who can participate in the case (roles, users, groups) and the permissions for individual case actions (read case, create document, etc.).

    Business Rules - the main component controlling the flow of a case. Each case has an associated business ruleset, and rules are fired on receiving case events (or user-defined events): lifecycle events, milestone events, activity events, data events, document events, comment events and user events.

    Managing the Case

    Managing the lifecycle of a case is achieved in two ways:

    - managing case logic with business rules
    - managing the case lifecycle via the Case APIs

    A BPM Case can be viewed as a set of case data and documents, along with the activities that can be performed within a case, and the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case lifecycle is achieved through both the configuration of business rules and "manual" interaction with a case instance through the Case APIs.

    Business Rules and Case Events

    A key component within the Case Management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework at each of the processes and stages that a case instance will travel through in its lifetime. The following case events are part of the BPM Case: lifecycle events, milestone events, activity events, data events, document events, comment events and user events. The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to act upon an event being received.

    Case API & Interaction

    Along with the business rules component, cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications that integrate into the case management framework. The APIs allow for updating case comments and documents, executing case activities, updating milestones, etc. As there are no built-in case management UI functions within the PS6 release, cases need to be managed via a custom-built UI, interacting with selected case instances, launching case activities, closing cases, etc.
    (There is expected to be a UI component within subsequent releases.)

    Logical Case Flow

    The diagram below is intended to depict a logical view of the case steps for a typical case:

    1. A UI or other service calls the Case interface to create a Case instance.
    2. The case instance is created and the database data inserted.
    3. A lifecycle event is raised indicating a case (created) event.
    4. The case business rules capture the event and decide on an action to take. Additionally, other parties can subscribe to case events via EDN.
    5. The business rules may handle the event, e.g. they may be configured to execute a case activity on the case-created event.
    6. The BPM/Human Workflow/custom activity is executed.
    7. A case activity event is raised on executing the activity.
    8. A case work UI or business service can inspect the case instance and call other actions to progress that case, such as: execute activity, add note, add document, add case data, update milestone, raise user-defined event, suspend case, resume case, close case.

    Summary

    Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases!

    Take a look at the following links for official documentation etc.

    http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm
    https://blogs.oracle.com/bpm/entry/just_in_case
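    To make the event-driven pattern described above concrete, here is a tiny conceptual sketch in C# of a case publishing lifecycle events to a rules component that reacts to them. It illustrates the pattern only; it is not Oracle's Case API, and every type and member name in it is invented:

    using System;

    enum CaseEventKind { Created, ActivityStarted, ActivityEnded, MilestoneReached, Closed }

    record CaseEvent(string CaseId, CaseEventKind Kind, string Detail);

    class CaseInstance
    {
        public string Id { get; } = Guid.NewGuid().ToString("N");
        // Stand-in for the EDN publish/subscribe channel.
        public event Action<CaseEvent>? Raised;

        public void Create() =>
            Raised?.Invoke(new CaseEvent(Id, CaseEventKind.Created, "case created"));
        public void StartActivity(string name) =>
            Raised?.Invoke(new CaseEvent(Id, CaseEventKind.ActivityStarted, name));
    }

    class CaseRules
    {
        // A "business rule": on case creation, kick off an initial activity.
        public void Handle(CaseInstance c, CaseEvent e)
        {
            if (e.Kind == CaseEventKind.Created)
                c.StartActivity("InitialReview");
        }
    }

    class Demo
    {
        static void Main()
        {
            var c = new CaseInstance();
            var rules = new CaseRules();
            c.Raised += e => { Console.WriteLine($"{e.Kind}: {e.Detail}"); rules.Handle(c, e); };
            c.Create();  // prints Created, then ActivityStarted via the rule
        }
    }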

    Read the article

  • PO Communication in PDF

    - by Robert Story
    Upcoming Webcasts

    Date: March 29, 2010. Time: 2 pm London, 9:00 am EDT, 6:00 am PDT, 13:00 GMT. Click here to register for this session.
    Date: March 29, 2010. Time: 9 am London, 4:00 am EDT, 1:00 am PDT, 8:00 GMT. Click here to register for this session.

    Product Family: Procurement

    Summary

    This one-hour session is recommended for technical and functional users who would like to know about the PO Communication functionality in Procurement. Topics will include:

    - Introduction to PO PDF communication - 11.5.10
    - Key concepts, prerequisites, scope
    - Overview of PDF document generation: PDF solution overview, technical overview of PDF generation
    - Setup steps
    - Triggering points of PDF generation: PO Output for Communication concurrent program, Enter PO form (View Doc), iSupplier portal/Contracts preview
    - Enhancements: PDF generation in custom layouts, attachments in fax communication, R12 communication, non-text attachments through email, customizing templates
    - Advantages of PDF communication
    - Troubleshooting (tips)

    A short, live demonstration (only if applicable) and question and answer period will be included.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:

    def debug_big_file(request):
        return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

    $ curl https://example.com/debug/big-file | wc
    …
    curl: (18) transfer closed with outstanding read data remaining
    0 1 65283

    Note that everything works as expected when accessing gunicorn directly:

    $ curl http://localhost:1234/debug/big-file | wc
    …
    0 1 500000

    The relevant nginx config:

    location / {
        proxy_pass http://localhost:1234/;
        proxy_redirect off;
        proxy_headers_hash_bucket_size 96;
    }

    And nginx version 1.7.0. Some other facts:

    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283).
    - 110k bytes are sent correctly (ie, "x" * 110000 returns all 110,000 bytes), but 120k bytes are not.
    - tcpdump suggests that nginx is sending a RST packet to gunicorn:
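    One hedged guess, since the cutoff sits suspiciously close to 64 KB: this is the classic signature of nginx failing to spill its proxy buffers to disk. By default nginx holds roughly that much of an upstream response in memory (proxy_buffer_size plus proxy_buffers) and writes the remainder to proxy_temp_path; if the worker process cannot write to that temp directory (wrong ownership or permissions), the transfer is truncated right around the in-memory buffer size. If that is what's happening, error.log should contain "Permission denied" entries referencing the proxy temp path, and adding proxy_buffering off; to the location block (as a diagnostic step, not a fix, since it trades away buffering) should make the truncation disappear.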

    Read the article

  • Can I rebuild degraded RAID1 on a SAS 5 i controller with a larger mirror drive size?

    - by Kenny
    A drive failed on a Dell SAS 5i controller - see controller BIOS screengrab: http://imagebin.ca/view/SkZbszA.html The primary is a 160GB 10k SATA drive. I added a 250GB 7200 rpm drive in the hope that the array would rebuild onto this drive (assuming that the controller would just operate at the speed of the slowest drive). This did not happen. The controller could see the new drive, but it didn't automatically rebuild the RAID1 onto it; my assumption is that it skipped the rebuild because the drive sizes are different. There was an option to add the new drive to the existing RAID1 array, but when I tried this a message appeared stating that all data on the array would be lost. (I didn't get a screenshot of this message; I will later.) Should the SAS 5i allow me to rebuild the array onto a larger drive? Is the option to add the drive to the array the right way to go? Many thanks!

    Read the article

  • How do I decrypt a password-protected PDF on OSX?

    - by Brant Bobby
    I have a PDF that requires a password to view. I know what the password is. I frequently open this PDF to print it, and find entering the password each time incredibly annoying. How can I remove the password from the PDF? Since I need to print it, simply taking a screenshot isn't a good solution. I tried printing the file to a PDF, but Preview disables the "Save as PDF..." option in the print dialog.

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time, because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it):

    - You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports
    - You can return persistent entities from a stored procedure and there are a couple ways to do that
    - You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious
    - You can reuse your named query result mapping other places (avoid duplication)
    - Caching your query results is not at all obvious
    - Testing to see if your cache is working is a pain

    NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going.

    The Stored Procedure

    NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored, as are any result sets after the first. Other than that, it's nothing special.

    CREATE PROCEDURE GetPopularProducts
        @StartDate DATETIME,
        @MaxResults INT
    AS
    BEGIN
        SELECT [ProductId], [ProductName], [ImageUrl]
        FROM SomeTableWithJoinsEtc
    END

    The Result Class - PopularProduct

    You have two options to transport your query results to your view (or wherever their final destination is): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view.

    public class PopularProduct
    {
        public virtual int ProductId { get; set; }
        public virtual string ProductName { get; set; }
        public virtual string ImageUrl { get; set; }
    }

    The NHibernate Mappings (hbm)

    Next up, we need to let NHibernate know about the query and where the results will go.
    Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element, which lets NHibernate know about our entity class.

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/>
      <resultset name="PopularProductResultSet">
        <return-scalar column="ProductId" type="System.Int32"/>
        <return-scalar column="ProductName" type="System.String"/>
        <return-scalar column="ImageUrl" type="System.String"/>
      </resultset>
    </hibernate-mapping>

    And now the PopularProductsMap:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal">
        <query-param name="StartDate" type="System.DateTime" />
        <query-param name="MaxResults" type="System.Int32" />
        exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults
      </sql-query>
    </hibernate-mapping>

    The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute.

    The Query Class – PopularProductsQuery

    So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated query class, so here's what it looks like:

    public class PopularProductsQuery : IPopularProductsQuery
    {
        private static readonly IResultTransformer ResultTransformer;
        private readonly ISessionBuilder _sessionBuilder;

        static PopularProductsQuery()
        {
            ResultTransformer = Transformers.AliasToBean<PopularProduct>();
        }

        public PopularProductsQuery(ISessionBuilder sessionBuilder)
        {
            _sessionBuilder = sessionBuilder;
        }

        public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults)
        {
            var session = _sessionBuilder.GetSession();
            var popularProducts = session
                .GetNamedQuery("GetPopularProducts")
                .SetCacheable(true)
                .SetCacheRegion("PopularProductsCacheRegion")
                .SetCacheMode(CacheMode.Normal)
                .SetReadOnly(true)
                .SetResultTransformer(ResultTransformer)
                .SetParameter("StartDate", startDate.Date)
                .SetParameter("MaxResults", maxResults)
                .List<PopularProduct>();

            return popularProducts;
        }
    }

    Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region, which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know, based on your parameters, whether to use its cached version or a fresh version.
The Configuration – hibernate.cfg.xml and Web.config

You need to explicitly enable second-level caching in your hibernate configuration:

    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        [...]
        <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
        <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property>
        <property name="cache.use_query_cache">true</property>
        <property name="cache.use_second_level_cache">true</property>
        [...]
      </session-factory>
    </hibernate-configuration>

Both the "use_query_cache" and "use_second_level_cache" properties are necessary. As this is for a web deployment, we're using SysCache, which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this case, SysCache) about our caching region:

    <syscache>
      <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" />
    </syscache>

Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy).

The Payoff

That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment.

Testing

Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
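In the meantime, here is a minimal sketch of one quick check, assuming you turn on generate_statistics in the configuration; the sessionFactory and query wiring, the start date, and the expected counter deltas are all assumptions for illustration:

    using System;
    using System.Diagnostics;
    using NHibernate;

    public static class CacheSmokeTest
    {
        // Requires <property name="generate_statistics">true</property> in hibernate.cfg.xml.
        public static void Run(ISessionFactory sessionFactory, IPopularProductsQuery query, DateTime startDate)
        {
            var missesBefore = sessionFactory.Statistics.QueryCacheMissCount;

            query.GetPopularProducts(startDate, 10);   // first run: goes to the database
            var missesAfterFirstRun = sessionFactory.Statistics.QueryCacheMissCount;

            query.GetPopularProducts(startDate, 10);   // second run: should be served from the cache
            var hits = sessionFactory.Statistics.QueryCacheHitCount;

            // Expect one new miss from the first run and at least one hit from the second.
            Debug.Assert(missesAfterFirstRun == missesBefore + 1);
            Debug.Assert(hits > 0);
        }
    }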

    Read the article

  • Unable to apt-get upgrade in ubuntu 11.10

    - by blackhole
    These are the errors shown by different clients.

Update Manager:

    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
        trans.unauthenticated = self._simulate_helper(trans)
      File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
        return depends, self._cache.required_download, \
      File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
        pm.get_archives(fetcher, self._list, self._records)
    SystemError: E:Method has died unexpectedly!, E:Sub-process returned an error code (100), E:Method /usr/lib/apt/methods/ did not start correctly

Synaptic Package Manager:

    E: Method has died unexpectedly!
    E: Sub-process returned an error code (100)
    E: Method /usr/lib/apt/methods/ did not start correctly
    E: Unable to lock the download directory

Command: sudo apt-get upgrade

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be upgraded:
      libfreetype6 libfreetype6-dev
    2 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Failed to exec method /usr/lib/apt/methods/
    E: Method has died unexpectedly!
    E: Sub-process returned an error code (100)
    E: Method /usr/lib/apt/methods/ did not start correctly

Can anyone tell me how to resolve these issues? I have no volatile packages or anything, so I am even posting a preview of my sources.list file:

    # deb cdrom:[Ubuntu 10.10 _Maverick Meerkat_ - Release i386 (20101007)]/ maverick main restricted
    # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
    # newer versions of the distribution.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted
    ## Major bug fix updates produced after the final release of the
    ## distribution.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team. Also, please note that software in universe WILL NOT receive any
    ## review or updates from the Ubuntu security team.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe
    ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
    ## team, and may not be under a free licence. Please satisfy yourself as to
    ## your rights to use the software. Also, please note that software in
    ## multiverse WILL NOT receive any review or updates from the Ubuntu
    ## security team.
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
    ## Uncomment the following two lines to add software from the 'backports'
    ## repository.
    ## N.B. software from this repository may not have been tested as
    ## extensively as that contained in the main release, although it includes
    ## newer versions of some applications which may provide useful features.
    ## Also, please note that software in backports WILL NOT receive any review
    ## or updates from the Ubuntu security team.
    # deb http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
    # deb-src http://in.archive.ubuntu.com/ubuntu/ maverick-backports main restricted universe multiverse
    ## Uncomment the following two lines to add software from Canonical's
    ## 'partner' repository.
    ## This software is not part of Ubuntu, but is offered by Canonical and the
    ## respective vendors as a service to Ubuntu users.
    deb http://archive.canonical.com/ubuntu oneiric partner
    deb-src http://archive.canonical.com/ubuntu oneiric partner
    ## This software is not part of Ubuntu, but is offered by third-party
    ## developers who want to ship their latest software.
    deb http://extras.ubuntu.com/ubuntu oneiric main
    deb-src http://extras.ubuntu.com/ubuntu oneiric main
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
    deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
    # deb http://archive.canonical.com/ lucid partner
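As a starting point for this kind of failure, a generic diagnostic sketch follows; it is an assumption based on the error text above, not a confirmed fix for this machine:

    # Generic diagnostics for "Method /usr/lib/apt/methods/ did not start correctly"
    ls -l /usr/lib/apt/methods/            # http, ftp, gpgv etc. should exist and be executable
    sudo apt-get clean                     # clear the local package cache
    sudo apt-get update                    # rebuild the package lists
    sudo apt-get install --reinstall apt   # restore the method binaries if they are damaged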

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 27 (sys.dm_db_file_space_usage)

    - by Tamarick Hill
    The sys.dm_db_file_space_usage DMV returns information about database file space usage. This DMV was enhanced in the 2012 version to include 3 additional columns. Let's query this DMV against our AdventureWorks2012 database and view the results.

    SELECT * FROM sys.dm_db_file_space_usage

The columns returned from this DMV are really self-explanatory, but I will give you a description, paraphrased from Books Online, below. The first three columns returned from this DMV represent the Database, File, and Filegroup for the current database context that executed the DMV query. The next column is the total_page_count, which represents the total number of pages in the file. The allocated_extent_page_count represents the total number of pages in all extents that have been allocated. The unallocated_extent_page_count represents the number of pages in the unallocated extents within the file. The version_store_reserved_page_count column represents the number of pages that are allocated to the version store. The user_object_reserved_page_count represents the number of pages allocated for user objects. The internal_object_reserved_page_count represents the number of pages allocated for internal objects. Lastly, the mixed_extent_page_count represents the total number of pages that are part of mixed extents. This is a great DMV for retrieving space usage information from your database files. For more information about this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms174412.aspx Follow me on Twitter @PrimeTimeDBA
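As a quick sketch of my own (not from Books Online), the page counts convert easily to megabytes, since SQL Server pages are 8 KB (128 pages per MB):

    -- Sketch: page counts expressed in MB (8 KB pages, 128 pages per MB)
    SELECT database_id,
           file_id,
           total_page_count / 128.0 AS total_mb,
           allocated_extent_page_count / 128.0 AS allocated_mb,
           unallocated_extent_page_count / 128.0 AS unallocated_mb
    FROM sys.dm_db_file_space_usage;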

    Read the article

  • DIVIDE vs division operator in #dax

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote an interesting article about DIVIDE performance in DAX. This new function was introduced in SQL Server Analysis Services 2012 SP1, so it is also available in Excel 2013 (which still doesn't have other features/fixes introduced by the following Cumulative Updates…). The idea is that instead of writing:

    IF ( Sales[Quantity] <> 0, Sales[Amount] / Sales[Quantity], BLANK () )

you can write:

    DIVIDE ( Sales[Amount], Sales[Quantity] )

There is a third optional argument in DIVIDE that defines the result in case the denominator (second argument) is zero; by default its value is BLANK, so I omitted the third argument in my example. Using DIVIDE is very important, especially when you use a measure in MDX (for example, in an Excel PivotTable), because it raises the chance that the non-empty evaluation for the result is done in bulk mode instead of cell-by-cell. However, from a DAX point of view, you might find it's better to use the standard division operator, removing the IF statement. I suggest you read Alberto's article, because you will find that an expression applying a filter using FILTER is faster than one using CALCULATE, which goes against any rule of thumb you might have read until now! Again, this is not always true and depends on many conditions. Trying to simplify, we might say that for a simple calculation the query plan generated by FILTER could be more efficient; but, as usual, it depends, and 90% of the time using FILTER instead of CALCULATE produces slower performance. Do not take anything for granted, and always check the query plan when performance is your first concern!
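As an illustration of that third argument (my own sketch, not from Alberto's article), the following measure returns 0 instead of BLANK() when the denominator is zero; the measure name and the SUM wrappers are assumed for a measure context over the same Sales table:

    -- Hypothetical measure: returns 0 rather than BLANK() for zero quantities
    AveragePrice := DIVIDE ( SUM ( Sales[Amount] ), SUM ( Sales[Quantity] ), 0 )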

    Read the article

  • FRIDAY SPOTLIGHT: Oracle Linux and Virtualization Showcase @ Oracle OpenWorld

    - by Zeynep Koch
    The Oracle Linux and Virtualization Showcase (aka the "Pavilion") at Oracle OpenWorld will be amazing this year. You can find us in a spacious area in Moscone South (Booth #611), featuring many of our key partners. New this year in the Showcase, you will also find Oracle demo pods showcasing Oracle Linux and Oracle Virtualization. In addition, we are also featuring OpenStack. A lot of exciting technologies and solutions in one stop! Oracle Linux and Virtualization partners will be on the floor with their latest integrations and solutions to help you better accelerate your infrastructure investments. Come by the Showcase to network, win some prizes, and walk away with:

- Insights and real-world implementation examples from participating ISV, IHV and SI partners
- Deeper knowledge of the latest developments in Oracle Linux, Oracle Virtualization, and the Oracle OpenStack integrations
- A broader view of how Oracle and partners are implementing OpenStack

Whether you are modernizing your IT or planning an OpenStack deployment, join us in the Oracle Linux and Virtualization Showcase, and our experts will help you visualize your future, simplify your IT life, and realize further profitability for your business. Starting next week here on the Linux and Virtualization blogs, we'll go into detail about the partners you can visit in the Oracle Linux and Virtualization Showcase. In the meantime, don't forget to mark Moscone South, Booth #611 as a place to visit this year. Hope to see you in our Oracle Linux and Virtualization Showcase!

    Read the article

  • Create an Alias Directory inside a Virtual Host

    - by Praveen Kumar
    First, let me say that I asked this question on StackOverflow and thought I could get more replies here. I checked here, here, here, here, here, and here before asking this question. I guess my search skills are weak. I am using WampServer version 2.2e. My need is this: I want a virtual path inside a virtual host. Here are the two hosts that I have.

Primary Virtual Host (Localhost)

    NameVirtualHost *:80
    <VirtualHost *:80>
        ServerName localhost
        DocumentRoot "C:/Wamp/www"
    </VirtualHost>

My Apps Virtual Host

    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

My Blog Virtual Host

    <VirtualHost *:80>
        ServerName blog.praveen-kumar.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        ErrorLog "logs/praveen-kumar-ptrl-error.log"
        CustomLog "logs/praveen-kumar-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

My requirement now is that http://apps.ptrl/blog/ and http://blog.praveen-kumar.ptrl/ should point to the same directory. One thing I thought of was moving the blog folder inside the apps folder, but it is connected with Git and other stuff is there, so it is not possible to move the folder. So I thought of adding an alias to the VirtualHost this way:

    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
        # The alias to the blog!
        Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
    </VirtualHost>

But when I tried to access http://apps.ptrl/blog, I got an Error 403 Forbidden page. Am I doing the right thing? If you need to look at the access log and error log, they are here:

    # Access Log
    127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /blog HTTP/1.1" 403 206
    127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /favicon.ico HTTP/1.1" 404 209
    127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET / HTTP/1.1" 200 6935
    127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET /app/blog/thumb.png HTTP/1.1" 404 216

    # Error Log
    [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/Wamp/vhosts/ptrl/praveen-kumar/blog
    [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/favicon.ico
    [Sun Oct 14 09:53:53 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/app/blog, referer: http://apps.ptrl/

Waiting eagerly for some help. I am ready to provide more info if needed.

Update #1: Changed VirtualHost:

    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        # The alias to the blog!
        Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

The issue now: I am able to access the site, and the physical links are working, i.e., I can open http://apps.ptrl/blog/index.php but not http://apps.ptrl/blog/view-1.ptf, which gets translated to http://apps.ptrl/blog/index.php?page=view&id=1. Any solutions?
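One hedged possibility, since the blog's rewrite rules aren't shown in the question: if the .ptf URLs are produced by mod_rewrite rules in the blog's .htaccess, those rules were probably written for the site root and would need a RewriteBase under the /blog alias. A sketch, with the rule pattern assumed from the URL in the question:

    # Hypothetical .htaccess in C:/Wamp/vhosts/ptrl/praveen-kumar/blog
    # The rule below is an assumption based on the URL pattern, not the blog's actual rules.
    RewriteEngine On
    RewriteBase /blog/
    RewriteRule ^view-(\d+)\.ptf$ index.php?page=view&id=$1 [L,QSA]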

    Read the article

< Previous Page | 653 654 655 656 657 658 659 660 661 662 663 664  | Next Page >