Search Results

Search found 24403 results on 977 pages for 'matt case'.

  • Generate webservice proxy using Oracle ant tasks

    - by adrian.muraru
    Proxy generation tends to be very slow when done using the JDeveloper wizard, and the time increases even more when JDeveloper is started over a remote desktop connection. So here's a step-by-step how-to that can be used to generate a webservice proxy from your *nix shell.

    1. Create a dir in your scratch area, e.g. /tmp/<username>/genproxy

    2. Get the build.xml file attached, save it in the dir above, and change the properties defined in it to match your ws endpoint. More specifically you need to edit:
       - proxy.wsdl - the path (either local or a URL) where the WSDL file can be accessed
       - proxy.handler - the handler class
       - proxy.package - the class package where the proxy will be generated

    3. Start a new shell session (out of the ADE view if you're using one) and set the environment needed for proxy generation using ant and Oracle WebServicesAssembler genProxy [1]:

        $ setenv ORACLE_HOME /opt/jdev_local/10.1.3/
        $ setenv PATH $ORACLE_HOME/ant/bin:$PATH

       Note that the above env setup is needed even if you already have ORACLE_HOME set and the ant utility available in your PATH. That way you'll be sure the proxy is generated using the same libraries your JDeveloper uses in its wizard.

    4. Generate the proxy:

        $ cd /tmp/<username>/genproxy
        $ ant genproxy

    And voila, the proxy files should be available in the ./src directory.

    Notes:
    [1] More information about genProxy can be found at: http://download.oracle.com/docs/cd/B32110_01/web.1013/b28974/wsassemble.htm#CHDJJIEI
    [2] In my case this method is much faster than using the JDeveloper wizard (15 secs compared to 25 minutes).
    [3] There is one minor drawback though: the JDeveloper .proxy configuration file is not generated.
    -Adrian

  • Git - post-receive hook with git pull "Failed to find a valid git directory"

    - by ludicco
    It's very weird, but when setting up a git repository and creating a post-receive hook with:

        echo "--initializing hook--"
        cd ~/websites/testing
        echo "--prepare update--"
        git pull
        echo "--update completed--"

    the hook runs indeed, but it never manages to run git pull properly:

        6bfa32c..71c3d2a master -> master
        --initializing hook--
        --prepare update--
        fatal: Not a git repository: '.'
        Failed to find a valid git directory.
        --update completed--

    So I'm asking myself now: how is it possible to make the hook update the clone with post-receive? In this case the user running the processes is the same, and everything is inside the user's folder, so I really don't understand... because if I go manually into

        cd ~/websites/testing
        git pull

    it works without any problem. Any help on that would be pretty much appreciated. Thanks a lot
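
    (A note not in the original question, but the usual culprit: git runs hooks with the GIT_DIR environment variable already set - typically to '.' - so the cd changes directory while git pull still looks for the repository in GIT_DIR and fails. A minimal corrected hook, assuming the working copy lives at ~/websites/testing:)

        #!/bin/sh
        # post-receive runs with GIT_DIR set by git; unset it so the
        # git command below operates on the repository we cd into.
        unset GIT_DIR
        echo "--initializing hook--"
        cd ~/websites/testing || exit 1
        echo "--prepare update--"
        git pull
        echo "--update completed--"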

  • ESXi 5 network performance is slow

    - by R D
    We just did a fresh install of ESXi 5 on a host that was running ESX 4 before. Nothing has changed hardware-wise. After the upgrade, network performance is much slower; even copying a big file from one VM to another VM within the same virtual switch is slower compared to other hosts that are running ESX 4. Network cards are auto-negotiating at 1Gbps, as they were on ESX 4 prior to the upgrade. All settings are default and I haven't played with Advanced Settings at all. Before opening a case with VMware, I wanted to know if I am missing something, or if others have experienced similar issues and found a fix.
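
    (Not from the original question, but before opening that case, a couple of quick checks on the ESXi 5 host can confirm what the NICs actually negotiated - a sketch using the standard esxcli namespace:)

        # All physical NICs with link state, speed and duplex
        esxcli network nic list

        # Driver and negotiation details for a single NIC (vmnic0 is an example)
        esxcli network nic get -n vmnic0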

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start.

    This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1
        UNION ALL
        SELECT c1 FROM dbo.T2
        UNION ALL
        SELECT c1 FROM dbo.T3
        UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on.

    In case you were wondering, here’s a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan.

    My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!

    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,
            CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY (VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.)

    There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though.

    Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan. Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104). What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: the highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as trace flag 8608 output (initial memo contents) shows.

    During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select). The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New Cardinality Estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation.

    Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: this output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so.

    To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    The estimated execution plan shows the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection.

    I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek.

    That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators:

    Segment and Sequence Project
    Common Subexpression Spools
    Why Plan Operators Run Backwards
    Row Goals and the Top Operator
    Hash Match Flow Distinct
    Top N Sort
    Index Spools and Page Splits
    Singleton and Range Seeks
    Bitmaps
    Hash Join Performance
    Compute Scalar

    © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

  • IASA South East Florida Chapter Meeting Recap - June 2011

    - by Sam Abraham
    Erik Russell and Giles Marino were our speakers for the June 2011 IASA South East Florida Chapter meeting. Attendees filled all available seats at the Microsoft office conference room where the event was held, which highlights the high interest in Enterprise Architecture as a career track and chartered project role. Also in attendance were our Board of Directors and Alex Funkhouser, President, Sherlock Technology.

    Rainer Habermann, Chapter President, kicked off the meeting by introducing our speakers and Board of Directors. Alex Funkhouser, President of South Florida’s staffing firm Sherlock Technology, spoke briefly about available Software Architect positions in the area. Alex also congratulated this week’s Sherlock Raffle winner and presented them with $500 in cash.

    Our speakers Giles and Erik then proceeded with their talk. Erik presented a business case in the government sector where Enterprise Architecture helped a government entity cut costs and streamline its various business operations; the technologies leveraged in Erik’s demonstrated project were Java-based. Giles then followed with a thorough demonstration of the architecture patterns he used to migrate a complete backend system for an insurance company to the .Net platform. The audience was very engaged with our speakers, as evidenced by the large number of follow-up questions asked at the end of the talk. We greatly enjoyed Giles and Erik’s talk and look forward to having them share with us more of their adventures as Enterprise Architects in the near future.

    Below are some photos of the event.

    Sam Abraham
    Secretary - IASA South East Florida Chapter
    http://www.iasaglobal.org/iasa/South_East_Florida.asp

    Photo captions:
    - Chapter President Rainer Habermann kicks off our meeting.
    - Sherlock Technology President Alex Funkhouser holding Sherlock's weekly cash prize.
    - Alex shares available Software Architect opportunities with our members.
    - Erik Russell addressing our membership.
    - Giles Marino sharing his architecture experience in the insurance industry.
    - In this photo: Dave Noderer, Rainer Habermann, Quent Herschelman and Alex Funkhouser.
    - The event attracted a large audience and filled the Microsoft conference room where it was held.

  • LINQ: Single vs. First

    - by Paulo Morgado
    I’ve witnessed and been involved in several discussions around the correctness or usefulness of the Single method in the LINQ API. The most common argument is that you are querying for the first element on the result set and an exception will be thrown if there’s more than one element; the First method should be used instead, because it doesn’t throw if the result set has more than one item. Although the documentation for Single states that it returns a single, specific element of a sequence of values, it actually returns THE single, specific element of a sequence of ONE value. Once you use the Single method in your code, you are asserting that your query will produce a scalar result instead of a result set of arbitrary length. On the other hand, the documentation for First states that it returns the first element of a sequence of arbitrary length.

    Imagine you want to catch a taxi. You go to the taxi line and catch the FIRST one, no matter how many are there. On the other hand, if you go to the parking lot to get your car, you want the SINGLE specific car that’s yours. If your “query” “returns” more than one car, it’s an exception - either because it “returned” not only your car, or because you happen to have more than one car in that parking lot. In either case, you can only drive one car at once and you’ll need to refine your “query”.
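
    (To make the distinction concrete, here is a minimal C# illustration of the parking-lot analogy - my own sketch, not from the original post; requires System.Linq:)

        // Two cars owned by the same person.
        var cars = new[]
        {
            new { Id = 1, Owner = "me" },
            new { Id = 2, Owner = "me" }
        };

        // First: "give me the first match, however many there are" - returns Id 1.
        var firstMatch = cars.First(c => c.Owner == "me");

        // Single: "there must be exactly one match" - throws
        // InvalidOperationException here, because two cars match the predicate.
        var myCar = cars.Single(c => c.Owner == "me");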

  • Automating SSRS 2008 R2 report snapshots while still running reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot manually in the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong and do I have to create a subscription instead? Note: this is for an auditing database and it has way too much data in it to query a range with more than 1 day in it - reports are restricted as such (1 day has over 1 million rows on its own).

  • LLBLGen Pro and JSON serialization

    - by FransBouma
    I accidentally removed a reply from my previous blogpost, and as this blog engine here at weblogs.asp.net is apparently falling apart, I can't re-add it, as it thought it would be wise to disable comment controls on all posts except new ones. So I'll post the reply here as a quote and reply to it. 'Steven' asks:

        "What would the future be for LLBLGen Pro to support JSON for serialization? Would it be worth the effort for a LLBLGen Pro user to bother creating some code templates to produce additional JSON serializable classes? Or just create some basic POCO classes which could be used for exchange of client/server data and use DTOs to map these back to LLBLGen Pro ones? If I understand the workaround, it is at the expense of losing XML serialization."

    Well, as described in the previous post, you can enable JSON serialization with a couple of lines and some attribute assignments. However, indeed, the attributes might make XML serialization stop working, as described in the previous blogpost. This is the case if the service you're using serializes objects using the DataContract serializer: this serializer will give up on serializing the entity objects to XML, as the entity objects implement IXmlSerializable and this is a no-go area for the DataContract serializer. However, if your service doesn't use a DataContract serializer, or you serialize the objects manually to XML using an XML serializer, you're fine.

    When you want to switch back to XML serialization instead of JSON in WebApi, and you have decorated the entity classes with the data-contract attributes, you can switch off the DataContract serializer by setting a global configuration setting:

        var xml = GlobalConfiguration.Configuration.Formatters.XmlFormatter;
        xml.UseXmlSerializer = true;

    This will make the WebApi use the XmlSerializer, and run the normal IXmlSerializable interface implementation.

  • Windows Boot Manager, linking a 'device' to boot Linux

    - by TheCompander
    I'm attempting to boot Linux on a UEFI-GPT machine with the Windows Boot Manager (WBM). So far I have installed Arch Linux (Arch) with GRUB. The grubx64.efi is successfully on my Windows boot partition and I can see the option to use it in the UEFI BIOS; selecting this loads GRUB and I'm able to get into Arch fine. I have noticed that in the Windows Boot Manager, selecting from the splash screen 'Change defaults or choose other options' -> 'Choose other options' -> 'Use a device' shows the same boot options as the UEFI BIOS; in my case GRUB shows as 'Linux'. Selecting 'Linux' reboots the computer and loads GRUB, then Arch. Is there any way to make this 'Linux' device entry show directly on the WBM splash screen under the entry for Windows 8.1? Ideally I'd like an 'Arch Linux' entry that links to the 'Linux' device. Guidance with bcdedit appreciated, thanks in advance.
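
    (For what it's worth - a suggestion, not part of the original question - the usual bcdedit recipe for putting an EFI loader on the WBM menu is to clone the Windows Boot Manager entry and point the copy at grubx64.efi. The EFI path below is an assumption; adjust it to wherever grubx64.efi actually sits on your EFI system partition:)

        rem Run from an elevated command prompt.
        rem 1. Clone the boot manager entry; note the {GUID} this prints.
        bcdedit /copy {bootmgr} /d "Arch Linux"

        rem 2. Point the new entry at the GRUB EFI binary (path is an assumption).
        bcdedit /set {GUID} path \EFI\arch\grubx64.efi

        rem 3. Add the entry to the boot menu shown on the splash screen.
        bcdedit /displayorder {GUID} /addlast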

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them: not hugely helpful, are they? Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true).

    Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

    Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on. You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let’s forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest.

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, the following query is an example of a singleton lookup seek. Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan. The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range.

    The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page. The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier. One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built. In the present case, the non-clustered primary key index was created as follows:

        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col ASC)
        );

    Notice that the primary key index specifies an ascending sort order for the single key column. This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order. If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both. Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction. Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive. The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right.

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order. The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5, because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order. SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95. Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value. This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that. Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order. This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order. Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

    One final point to note: seek operations always have the Ordered: True attribute. This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward. You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally. In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates. These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them. For example, you might expect the following query to contain two seek predicates: a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

        SELECT key_col
        FROM dbo.Example
        WHERE key_col = 10
        OR key_col BETWEEN 15 AND 20
        ORDER BY key_col ASC;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10]. This allows both range scans to be evaluated by a single seek operator. To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek. On a heap. Not an index! If you would like to run the queries in this post for yourself, there’s a script below. Thanks for reading!

        IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
        BEGIN
            DROP TABLE dbo.Example;
        END;

        -- Test table is a heap
        -- Non-clustered primary key on 'key_col'
        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col)
        );

        -- Non-unique non-clustered index on the 'data' column
        CREATE NONCLUSTERED INDEX [IX dbo.Example data]
        ON dbo.Example (data);

        -- Add 100 rows
        INSERT dbo.Example WITH (TABLOCKX)
            (key_col, data)
        SELECT
            key_col = V.number,
            data = V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 100;

        -- ================
        -- Singleton lookup
        -- ================

        -- Single value equality seek in a unique index
        -- Scan count = 0 when STATISTICS IO is ON
        -- Check the XML SHOWPLAN
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 32;

        -- ===========
        -- Range Scans
        -- ===========

        -- Query 1
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col <= 5
        ORDER BY E.key_col ASC;

        -- Query 2
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col > 95
        ORDER BY E.key_col DESC;

        -- Query 3
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 10
        AND E.key_col < 15
        ORDER BY E.key_col ASC;

        -- Query 4
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 20
        AND E.key_col < 25
        ORDER BY E.key_col DESC;

        -- Final query (singleton + range = 2 range scans)
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 10
        OR E.key_col BETWEEN 15 AND 20
        ORDER BY E.key_col ASC;

        -- === TIDY UP ===
        DROP TABLE dbo.Example;

    © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

  • Emulate "Go to Dekstop/Home/etc." behavior in OS X via AppleScript

    - by pattulus
    OS X has built-in support for going to certain folders (Home, Utilities, Desktop, etc.) via a shortcut. I wanted to emulate this behavior for the Downloads folder. The only thing missing in the script below is that it won’t succeed when no window is open in the Finder (see the error message).

        tell application "Finder"
            activate
            set target of Finder window 1 to folder "Downloads" of folder "username" of disk "Macintosh HD"
        end tell

    Error message:

        error "Finder got an error: Can’t set Finder window 1 to folder \"Downloads\" of folder \"username\" of disk \"Macintosh HD\"." number -10006 from Finder window 1

    It would be great if you know about some kind of 'if-complement' that triggers opening the Downloads folder in case there is no window 1 open in the Finder. Thanks in advance.
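
    (One possible 'if-complement', sketched against the question's hard-coded path - "username" and the disk name are the placeholders from the question; counting Finder windows guards the no-window case:)

        tell application "Finder"
            activate
            if (count of Finder windows) > 0 then
                set target of Finder window 1 to folder "Downloads" of folder "username" of disk "Macintosh HD"
            else
                -- No window to retarget, so open the folder in a new window instead.
                open folder "Downloads" of folder "username" of disk "Macintosh HD"
            end if
        end tell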

  • aliasing "git" ssh login to "gitolite"

    - by Randal Schwartz
    I'm installing gitolite from CentOS packages for my client. The package creates a gitolite user, which will be visible explicitly during "git clone" operations. The client wants to use "git" and not "gitolite", in case we change to something more fancy later. I'm not very familiar with CentOS, so I don't want to try to build the package myself from source. I'm wondering if there's a way to do one of the following:

    1. Trick sshd into treating "git" as "gitolite".
    2. Somehow "alias" a new git username to be the same in all ways as the existing gitolite username (perhaps through some complex combination of useradd).
    3. Rename the "gitolite" username to "git" without upsetting later yum update operations.
    4. Something else that I hadn't thought of.

    I'd appreciate detailed instructions or pointers.
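
    (A sketch of option 3, offered as an assumption rather than a tested recipe - a later yum update of the package may recreate or expect the gitolite account, so re-check after upgrades; the home directory path depends on what the package created:)

        # Rename the account and its group, then move the home directory.
        usermod -l git gitolite
        groupmod -n git gitolite
        usermod -d /home/git -m git   # adjust if the package used a different home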

  • Manipulating Human Tasks (for testing) by Mark Nelson

    - by JuergenKress
    A few months ago, while working on a BPM migration, I had the need to look at the status of human tasks, and to manipulate them – essentially to just have a single user take random actions on them at some interval, to help drive a set of processes that were being tested. To do this, I wrote a little utility called httool. It reuses some of the core domain classes from my custom worklist sample (with minimal changes to make it a remote client instead of a local one). I have not got around to documenting it yet, but it is pretty simple and fairly self-explanatory, so I thought I would go ahead and share it with folks, in case anyone is interested in playing with it. You can get the code from my ci-samples repository on java.net:

        git clone git://java.net/ci4fmw~ci-samples

    It is in the httool directory. I do plan to get back to this “one day” and enhance it to be more intelligent – target particular task types, update the payload, follow a set of “rules” about what action to take – so that I can use it for driving more interesting test scenarios. If anyone is feeling generous with their time, and interested, please feel free to join the java.net project and hack away to your heart’s content.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • Formatting Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in ADF Faces Rich Client provides a very simple way of quickly dumping out the contents or selected rows of a table or treeTable to Excel. However, that simplicity comes at a price, as it's pretty much left up to Excel how to format the data. A common use case where you have a problem is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended. In earlier releases of the framework you could sort of work around this by taking advantage of a bug which would allow you to surround the outputText in question with invisible outputText components which provided formatting hints to Excel. Something like this:

        <af:column headerText="Some wide label">
          <af:panelGroupLayout layout="horizontal">
            <af:outputText value="=TEXT(" visible="false"/>
            <af:outputText value="#{row.bigNumberValue}" rendered="true"/>
            <af:outputText value=",0)" visible="false"/>
          </af:panelGroupLayout>
        </af:column>

    However, this bug was fixed and so it can no longer be used as a trick; the export now ignores invisible columns. So, if you really need control over the formatting, there are several alternatives. First, the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control you might need on formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher, or Apache POI if you're happy with output-only spreadsheets.

  • Allen for Umbraco - Upload photos from your iPhone, iPad and iPod Touch

    - by Vizioz Limited
    At last year's UK Umbraco Festival we gave a demo of our alpha version of Allen for Umbraco; at that stage the application only worked on an iPhone and was a very quick prototype to see what people thought. When we returned to our office the next day, we decided that if we were going to release Allen for Umbraco into the wild we really should start again from scratch. The main two reasons for this were: first, to ensure it was a truly universal application (i.e. it can be installed on an iPhone, iPad or iPod) which looks and behaves differently depending on the device. The second reason was that we really wanted the application to be the foundation of more than just image uploading for Umbraco; for this to be the case we ensured the new version was built following proven design patterns and with lots of unit tests, so that we can easily extend it. We have lots of plans for future versions of Allen for Umbraco, including adding iCloud support to keep all your settings in sync across your multiple Apple devices. We are also working on support for Umbraco 5, which should be released soon.

    When you download the App and set up your site, make sure you have a look at the Image Resizing settings; by default we have set these to resize your images to 512 pixels wide, but you can choose from a variety of different resizing methods (by height, by width, fit within a frame, or the full-size image). Also, by default when you select a photo, you will see that the image is named with the date and time stamp of when the photograph was taken (or the current date and time if the original date is not stored in your image). If you click on this name you can edit the name of your photo before it is uploaded. Finally, we are really keen to get your feedback, so within the App help section you will find a way to submit suggestions and, if needed, you can send us support emails from within the App :)

    We hope you enjoy the first version of Allen for Umbraco and we look forward to bringing you lots of exciting additional functionality in the future!

  • Running a reverse proxy in front of Splunk 4.x

    - by sgerrand
    So, I previously installed Splunk 3.x behind a reverse proxy, and downloaded the latest version (4.0.6 at time of typing) expecting it to be as easy to use as before. Sadly this was not the case. There appear to be some elements which are not being translated correctly through the reverse proxy, causing Splunk to fail. I have used the following configuration in Apache2 to no avail:

        ServerName monitoringbox.com
        DocumentRoot /path/to/nowhere
        ProxyRequests off
        ProxyPass /splunk http://127.0.0.1:8000/splunk
        ProxyPassReverse /splunk http://127.0.0.1:8000/splunk
        Order allow,deny
        Allow from all

    Has anyone else had more luck than me in setting up Splunk 4.x behind a reverse proxy?
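
    (One thing worth checking - an assumption on my part, not from the original question - is that Splunk Web 4.x needs to be told it is being served under /splunk; that is the root_endpoint setting, after which Splunk Web should be restarted, e.g. with "splunk restart splunkweb":)

        # $SPLUNK_HOME/etc/system/local/web.conf
        [settings]
        root_endpoint = /splunk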

  • nTop RRD file architecture

    - by Seanny123
    I have a gig of nTop RRD files and I would like to start graphing them with rrdtool (but not with nTop, since I'm hoping to do this with a separate backup of the database, as a workaround to the impossibility of limiting the RRD files by size), but I don't know how the files are structured. I've tried reading the RRD documentation on SourceForge and the nTop FAQ, but I'm not finding the information I need. Does anyone know of any documentation I should be looking at, or how the files are structured? Here is a screenshot of the file structure: https://dl.dropbox.com/u/669437/file%20structure.png. At first I thought it was organized by IP address (so the rrd files for address 1.1.2.3 would be stored in folder 1-1-2-3, or even in reverse order), but that doesn't seem to be the case. It isn't organized by MAC address either, although some hosts are saved that way. Any help would be appreciated.
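
    (As a starting point - my suggestion, not from the original question - rrdtool itself can dump the internal structure of any RRD file, which answers the "how are the files structured" part without nTop-specific documentation; the filename is a placeholder:)

        # List the data sources and round-robin archives (RRAs) defined in a file
        rrdtool info somehost.rrd

        # Dump the stored values for the last day, to see what is actually in it
        rrdtool fetch somehost.rrd AVERAGE --start -1d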

  • Security permissions for remote shared folders

    - by Tomas Lycken
    I have two servers running Windows Server 2003, and I want to copy files from one server (A), programmatically with a Windows service running under the Local System account, to a shared folder on the other (B). I keep getting "access denied" errors, and I can't figure out what security settings I need to set to open the shared folder for writing. This is what I've done on the receiving end: on A, right-click on the folder to share, choose the "Sharing" tab and select "Share this folder". Set a share name. Click "Permissions", add the group "Everyone" and give it full control. I tried choosing the "Security" tab to give some permissions there as well, but the "Add" dialog only finds local users, despite the fact that B shows up in the "Workgroup computers" dialog. After further inspection, this is the case also for the "Permissions" dialog under the "Sharing" tab (are they the same?).

  • Minty Bug: Build an FM Bug Inside a Mint Container

    - by ETC
    Electronics projects that have real-world (and showing-off-to-your-friends) potential are the most fun; today we take a look at a clever FM bug design hidden in a mint container. At PyroElectro Projects they wanted to try something new with the whole electronics-in-mint-container genre. They opted to turn a container of Ice Breakers Frost mints (the Ice Breakers response to Altoids mints, presumably) into a small FM bug. The most clever part of the design is that the container still holds mints; aside from a small black dot on the back of the case, you'd have little reason to believe it was anything but a box of mints. Check out the video below to see the mint container unpacked and the hidden electronics payload revealed. If you're interested in the project, hit up the link below for additional information.

    FM Bug Transmitter Mint Box [Pyro Electro Projects via Hack A Day]

  • Inside the Concurrent Collections: ConcurrentBag

    - by Simon Cooper
    Unlike the other concurrent collections, ConcurrentBag does not really have a non-concurrent analogy. As stated in the MSDN documentation, ConcurrentBag is optimised for the situation where the same thread is both producing and consuming items from the collection. We'll see how this is the case as we take a closer look. Again, I recommend you have ConcurrentBag open in a decompiler for reference.

    Thread Statics

    ConcurrentBag makes heavy use of thread statics - static variables marked with ThreadStaticAttribute. This is a special attribute that instructs the CLR to scope any values assigned to or read from the variable to the executing thread, not globally within the AppDomain. This means that if two different threads assign two different values to the same thread static variable, one value will not overwrite the other, and each thread will see the value they assigned to the variable, separately to any other thread. This is a very useful function that allows for ConcurrentBag's concurrency properties. You can think of a thread static variable:

        [ThreadStatic]
        private static int m_Value;

    as doing the same as:

        private static Dictionary<Thread, int> m_Values;

    where the executing thread's identity is used to automatically set and retrieve the corresponding value in the dictionary. In .NET 4, this usage of ThreadStaticAttribute is encapsulated in the ThreadLocal class.

    Lists of lists

    ConcurrentBag, at its core, operates as a linked list of linked lists. Each outer list node is an instance of ThreadLocalList, and each inner list node is an instance of Node. Each outer ThreadLocalList is owned by a particular thread, accessible through the thread local m_locals variable:

        private ThreadLocal<ThreadLocalList<T>> m_locals;

    It is important to note that, although the m_locals variable is thread-local, that only applies to accesses through that variable. The objects referenced by the thread (each instance of the ThreadLocalList object) are normal heap objects that are not specific to any thread. Thinking back to the Dictionary analogy above, if each value stored in the dictionary could be accessed by other means, then any thread could access the value belonging to other threads using that mechanism. Only reads and writes to the variable defined as thread-local are re-routed by the CLR according to the executing thread's identity. So, although m_locals is defined as thread-local, the m_headList, m_nextList and m_tailList variables aren't. This means that any thread can access all the thread local lists in the collection by doing a linear search through the outer linked list defined by these variables.

    Adding items

    So, onto the collection operations. First, adding items. This one's pretty simple. If the current thread doesn't already own an instance of ThreadLocalList, then one is created (or, if there are lists owned by threads that have stopped, it takes control of one of those). Then the item is added to the head of that thread's list. That's it. Don't worry, it'll get more complicated when we account for the other operations on the list!

    Taking & Peeking items

    This is where it gets tricky. If the current thread's list has items in it, then it peeks or removes the head item (not the tail item) from the local list and returns that. However, if the local list is empty, it has to go and steal an item from another list, belonging to a different thread. It iterates through all the thread local lists in the collection using the m_headList and m_nextList variables until it finds one that has items in it, and it steals one item from that list.

    Up to this point, the two threads had been operating completely independently. To steal an item from another thread's list, the stealing thread has to do it in such a way as to not step on the owning thread's toes. Recall how adding and removing items both operate on the head of the thread's linked list? That gives us an easy way out - a thread trying to steal items from another thread can pop in round the back of another thread's list using the m_tail variable, and steal an item from the back without the owning thread knowing anything about it. The owning thread can carry on completely independently, unaware that one of its items has been nicked. However, this only works when there are at least 3 items in the list, as that guarantees there will be at least one node between the owning thread performing operations on the list head and the thread stealing items from the tail - there's no chance of the two threads operating on the same node at the same time and causing a race condition. If there are fewer than three items in the list, then there does need to be some synchronization between the two threads. In this case, the lock on the ThreadLocalList object is used to mediate access to a thread's list when there's the possibility of contention.

    Thread synchronization

    In ConcurrentBag, this is done using several mechanisms:

    - Operations performed by the owner thread only take out the lock when there are less than three items in the collection. With three or greater items, there won't be any conflict with a stealing thread operating on the tail of the list.
    - If a lock isn't taken out, the owning thread sets the list's m_currentOp variable to a non-zero value for the duration of the operation. This indicates to all other threads that there is a non-locked operation currently occurring on that list.
    - The stealing thread always takes out the lock, to prevent two threads trying to steal from the same list at the same time.
    - After taking out the lock, the stealing thread spinwaits until m_currentOp has been set to zero before actually performing the steal. This ensures there won't be a conflict with the owning thread when the number of items in the list is on the 2-3 item borderline.
    - If any add or remove operations are started in the meantime, and the list is below 3 items, those operations try to take out the list's lock and are blocked until the stealing thread has finished.

    This allows a thread to steal an item from another thread's list without corrupting it. What about synchronization in the collection as a whole?

    Collection synchronization

    Any thread that operates on the collection's global structure (accessing anything outside the thread local lists) has to take out the collection's global lock - m_globalListsLock. This single lock is sufficient when adding a new thread local list, as the items inside each thread's list are unaffected. However, what about operations (such as Count or ToArray) that need to access every item in the collection? In order to ensure a consistent view, all operations on the collection are stopped while the count or ToArray is performed. This is done by freezing the bag at the start, performing the global operation, and unfreezing at the end:

    1. The global lock is taken out, to prevent structural alterations to the collection.
    2. m_needSync is set to true. This notifies all the threads that they need to take out their list's lock regardless of what operation they're doing.
    3. All the list locks are taken out in order. This blocks all locking operations on the lists.
    4. The freezing thread waits for all current lockless operations to finish by spinwaiting on each m_currentOp field.

    The global operation can then be performed while the bag is frozen, but no other operations can take place at the same time, as all other threads are blocked on a list's lock. Then, once the global operation has finished, the locks are released, m_needSync is unset, and normal concurrent operation resumes.

    Concurrent principles

    That's the essence of how ConcurrentBag operates. Each thread operates independently on its own local list, except when they have to steal items from another list. When stealing, only the stealing thread is forced to take out the lock; the owning thread only has to when there is the possibility of contention. And a global lock controls accesses to the structure of the collection outside the thread lists. Operations affecting the entire collection take out all locks in the collection to freeze the contents at a single point in time. So, what principles can we extract here?

    - Threads operate independently: thread-static variables and ThreadLocal make this easy. Threads operate entirely concurrently on their own structures; only when they need to grab data from another thread is there any thread contention.
    - Minimised lock-taking: even when two threads need to operate on the same data structures (one thread stealing from another), they do so in such a way that the probability of actually blocking on a lock is minimised; the owning thread always operates on the head of the list, and the stealing thread always operates on the tail.
    - Management of lockless operations: any operations that don't take out a lock still have a 'hook' to force them to lock when necessary. This allows all operations on the collection to be stopped temporarily while a global snapshot is taken. Hopefully, such operations will be short-lived and infrequent.

    That's all the concurrent collections covered. I hope you've found it as informative and interesting as I have. Next, I'll be taking a closer look at ThreadLocal, which I came across while analyzing ConcurrentBag. As you'll see, the operation of this class deserves a much closer look.
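
    (As a teaser for that, here is a minimal sketch - mine, not from the article - of ThreadLocal<T> in action: each thread transparently gets its own copy of the value, just like the [ThreadStatic] fields ConcurrentBag relies on, but with lazy per-thread initialization:)

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        static class ThreadLocalDemo
        {
            // Each thread that touches Counter.Value gets its own int, starting at 0.
            static readonly ThreadLocal<int> Counter = new ThreadLocal<int>(() => 0);

            static void Main()
            {
                Parallel.For(0, 4, i =>
                {
                    Counter.Value++;   // increments this thread's copy only
                    Console.WriteLine("Thread {0}: {1}",
                        Thread.CurrentThread.ManagedThreadId, Counter.Value);
                });
            }
        }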

  • OpenGL ES 2 shaders for drawing buildings and roads like Google Maps does

    - by Pris
    I'm trying to create a shader that'll give me an effect similar to what buildings and roads look like on 3D Google Maps. You can see the effect interactively if you enable WebGL at maps.google.com, and I also found a couple of screenshots that illustrate what I'm trying to achieve. Things I noticed:

    - There's some kind of transparency thing going on with the roads/ground and the buildings, but not between the buildings themselves. It might be that they're rendering the ground and roads after the buildings with the right blend functions to achieve that effect.
    - If you look closely, you'll see parts of the building profiles have an outline. The roads also have nice clean outlines. There are a lot of techniques for outlining things with shaders... but I'm curious to find out what might have been used in this case, considering mobile hardware and a large number of entities with outlines (roads and buildings).
    - I'm assuming that for the lighting, some sort of simple diffuse per-vertex shader is being used for the buildings, though I could be wrong.

    I'm especially curious about the 'look' they achieved with buildings (clean, precise outlines/shading). It reminds me a little of what you'd see when designing stuff with CAD applications like SolidWorks. I'd appreciate any advice on achieving this kind of look with ES 2 shaders.
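
    (The "simple diffuse per-vertex shader" guessed at above might look like the following GLSL ES 2.0 vertex shader - an illustrative assumption, not a reconstruction of what Google actually ships; all uniform and attribute names are made up:)

        // Per-vertex diffuse ("Gouraud") lighting for flat-coloured buildings.
        attribute vec3 aPosition;
        attribute vec3 aNormal;
        uniform mat4 uMvp;            // model-view-projection matrix
        uniform mat3 uNormalMatrix;   // transforms normals into eye space
        uniform vec3 uLightDir;       // normalized light direction, eye space
        uniform vec3 uBaseColor;      // flat colour for this building
        varying vec4 vColor;

        void main() {
            vec3 n = normalize(uNormalMatrix * aNormal);
            float diffuse = max(dot(n, uLightDir), 0.0);
            vColor = vec4(uBaseColor * (0.4 + 0.6 * diffuse), 1.0); // ambient + diffuse
            gl_Position = uMvp * vec4(aPosition, 1.0);
        }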

  • Log Location Url Responses of 301 redirects from IIS

    - by James Lawruk
    Is there a way to log 301 redirects returned by IIS with (1) the request Url and (2) the Location Url of the response? Something like this:

        Url,        Location
        /about-us,  /about
        /old-page,  /new-page

    The IIS logs contain the request Url and the status code (301), but not the Location Url of the response. Ideally there would be an additional field in the IIS log called Location that would be populated when IIS responded with a 301. In my case the source of the redirect could be ISAPI Rewrite rules, ASP.NET applications, Cold Fusion applications, or IIS itself. Perhaps there is a way to log IIS response data? Thanks for your help.

  • It's an Oracle Linux Wrap: Oracle OpenWorld 2012

    - by Zeynep Koch
    Are you still recovering from an amazing Oracle OpenWorld experience? 50,000 attendees had access to thousands of sessions, demos, hands-on labs, networking opportunities, music concerts, and loads of fun. For the Oracle Linux team, this was a week full of many insightful sessions and customer interactions. In case you were unable to attend Oracle OpenWorld or missed some of the content presented, here's a compilation of key session presentations, keynotes, and videos. Go to the Oracle OpenWorld content catalog to access all the session presentations:

    - Oracle OpenWorld keynote by Edward Screven
    - Oracle's commitment to Open Source by Edward Screven
    - Oracle Linux interview with Wim Coekaerts
    - Making the most of the mainline kernel by Wim Coekaerts
    - Why DTrace and Ksplice have made Oracle Linux 6 popular by W. Coekaerts
    - How the partnership between Oracle Linux and Oracle Partners benefits sysadmins by Michele Resta
    - Hugepages = Huge Performance on Oracle Linux by Greg Marsden
    - Benefits of Ksplice in your Linux environment by Tim Hill
    - Oracle Linux, Ksplice and MySQL by Lenz Grimmer

    We also hosted a successful Oracle Linux Pavilion with 11 of our key partners - Beyond Trust, Centrify, Data Intensity, Fujitsu, HP, LSI, Mellanox, Micro Focus, NetApp, QLogic and Teleran - who showcased their solutions for Oracle Linux and Oracle VM. Here are some videos from the Oracle Linux Pavilion:

    - Centrify covers the Oracle Linux solution they offer at the Oracle Linux Pavilion
    - Mellanox talk about their solution at the Oracle Linux Pavilion
    - Eric Pan covers Micro Focus products at the Oracle Linux Pavilion

    There's also a collection of the keynotes and executive sessions posted as on-demand videos here. We hope you find this information useful and look forward to seeing you at Oracle OpenWorld 2013!

    ORACLE LINUX TEAM

  • Only show Windows 7 Preview Pane (in Explorer) when the file has a preview

    - by Jonathan
    I use the preview pane often, especially with PDFs. But when I select folders or files which don't have previews, or don't select anything at all, the preview pane stays, and it's quite big. When I have the Explorer window maximised on my 1920x1080 monitor it takes up about half my screen, and when I use Explorer in a smaller window the preview pane shrinks the center folder pane while staying half the size of the window. Is there any way to only show the preview pane when the file has a preview, and hide it again when the file doesn't, or no file is selected? (By the way, please don't suggest alternate file browsers, as they all look ugly and complicated.)

  • Where to find other versions of Opera browser as deb packages?

    - by cipricus
    I used Opera mainly for the Unite feature, which is now being abandoned. It is missing in v12. Some say its features will re-emerge in future extensions etc. Until then, Unite is still accessible in v11. Where do I get the v11 deb?

    P.S. In fact it seems that Opera Unite (at least in its older form) is dying while I am editing this question. Access to Opera Unite applications from within Opera Unite is poor or absent. This issue is obscure to me for now (31.08.2012) because yesterday I installed v12 on Windows (with Opera Unite and the basic applications - file sharing and media player - already installed) and it is still working (the server is working). The v12 Ubuntu version came today without Unite, and after installing v11 (which has Unite) I could not get the applications (file sharing, etc.). But they are still available: here, and after downloading these files, which have the .ua extension, they can be installed by opening them with Opera (v11). But as Opera Unite is no longer supported, it is possible that the server that provides the file sharing etc. will soon be inaccessible. Even if that is the case, the question should maybe not be closed, as it has a general usefulness independently of the Unite issue.
