Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • How to route traffic through a specific SOCKS proxy on a per-app basis?

    - by GJ.
    I'm running a certain desktop app (actually via AIR if it makes any difference) which doesn't have any built-in proxy configuration settings. I need to get all traffic just from this app directed through a secure SOCKS proxy. This implies I can't use the global network preferences, as these would affect many other apps. Is there any way to force all network communication through a given SOCKS proxy on a per-app basis? It would also be helpful to know if there's a way to perform such routing globally, based on specific IP addresses (as this could allow for some reasonable workaround).
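    One approach worth sketching (an assumption on my part, not a guarantee it works with AIR): on Linux or macOS, a wrapper such as proxychains-ng can intercept a single application's network calls and push them through a SOCKS proxy without touching the global settings. A minimal /etc/proxychains.conf, assuming the proxy listens on 127.0.0.1:1080, looks roughly like:

        # /etc/proxychains.conf (minimal sketch)
        strict_chain
        proxy_dns

        [ProxyList]
        socks5 127.0.0.1 1080

    The app is then launched as `proxychains4 /path/to/the-air-app`. For the IP-based fallback mentioned above, policy routing (`ip rule` / `ip route` on Linux) is the usual tool, but that operates per destination address, not per application.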

    Read the article

  • How do I configure pfsense as an outbound VPN client?

    - by Avery Chan
    We use pfsense as a router/firewall. Because we're based in China, it is useful for us to have VPN access for all our internal clients. Instead of each individual client connecting to a VPN server stateside, I'd like to configure pfsense as a VPN client and have all the network traffic be routed through it. Most of the posts I've seen regarding pfsense and VPN are concerning connecting to the LAN from outside; this is not what I want to do. Another option would be for an SSH tunnel to be initiated on the pfsense box with the LAN traffic routed through it. How do I configure pfsense to be able to do either of these? One huge caveat is that OpenVPN cannot be used. The solution I am looking for needs to use one of the other VPN protocols.

    Read the article

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, and it works nicely. I now need to create a middle-man program that sits between this program and the database. The middle-man will actually write to multiple databases and handle any errors and connection issues. I would like advice on how to replicate the prepared statements with minimal impact on the existing program, but I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, but that seems silly. The existing program is in Java, although the middle-man is going to be networked anyway, so I would be open to writing it in just about anything that makes sense. The databases are currently MySQL, although I would like to keep the option of changing the database type in the future. My main question is: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL servers that distribute data (or databases in general...) - perhaps I am fighting an uphill battle by trying to solve this via programming, but I would like to at least make an attempt.
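    A minimal sketch of one possible shape for that interface, assuming a hypothetical MultiDbStatement that mirrors only the subset of java.sql.PreparedStatement the existing program already uses and fans each write out to several DataSources (all names here are illustrative, not an existing library):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        /** Mirrors the subset of PreparedStatement the existing program uses. */
        public interface MultiDbStatement extends AutoCloseable {
            void setString(int index, String value) throws SQLException;
            void setInt(int index, int value) throws SQLException;
            int executeUpdate() throws SQLException;   // rows affected on the primary DB
            @Override void close() throws SQLException;
        }

        /** Fans one parameterised write out to every configured database. */
        class FanOutStatement implements MultiDbStatement {
            private final List<Connection> connections = new ArrayList<>();
            private final List<PreparedStatement> statements = new ArrayList<>();

            FanOutStatement(List<DataSource> targets, String sql) throws SQLException {
                for (DataSource ds : targets) {
                    Connection c = ds.getConnection();
                    connections.add(c);
                    statements.add(c.prepareStatement(sql));
                }
            }

            public void setString(int index, String value) throws SQLException {
                for (PreparedStatement ps : statements) ps.setString(index, value);
            }

            public void setInt(int index, int value) throws SQLException {
                for (PreparedStatement ps : statements) ps.setInt(index, value);
            }

            public int executeUpdate() throws SQLException {
                int primaryResult = 0;
                for (int i = 0; i < statements.size(); i++) {
                    int rows = statements.get(i).executeUpdate();
                    if (i == 0) primaryResult = rows;   // retry/error policy would go here
                }
                return primaryResult;
            }

            public void close() throws SQLException {
                for (PreparedStatement ps : statements) ps.close();
                for (Connection c : connections) c.close();
            }
        }

    Because the existing code would only ever see the interface, swapping MySQL for something else later means changing the DataSource list; the callers stay unchanged.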

    Read the article

  • FTP client that supports 2 concurrent FTP sessions

    - by oninea
    I'm looking for an FTP client that can connect to two different FTP servers at the same time and allow file transfer or synchronization between those two servers. Basically what I want to achieve is to transfer/synchronize files between 2 different sites from my local machine. Are there any clients around that support this functionality? If there are none, is there an alternative to achieve this? I've taken a look at net2ftp, a web based FTP client, which provides almost the same functionality that I need. What I'm looking for though is a desktop app. Any ideas?

    Read the article

  • What is the right way to Windows 7/Ubuntu 10.10 Dual-Triple Boot Partitioning for Laptop OEM?

    - by Denja
    Hi Linux community, I find myself struggling with the ever slow and buggy windoze OS once again. It's time to switch to Ubuntu 10.10 64-bit as a much faster operating system. My laptop's hard disk has RECOVERY and HP_TOOLS partitions; they are both primary. I have the System Recovery DVD for Windows 64-bit should anything happen. Here's the layout I used with Windows before:
    * (C:) Windows 7 system partition NTFS - 284,89GB (Primary, Boot, Pagefile, Dump)
    * HP_TOOLS system partition FAT32 - 99MB (Primary)
    * (D:) RECOVERY partition NTFS - 12,90GB (Primary)
    * SYSTEM partition NTFS - 199MB (Primary)
    Here's the layout I want to make, based on your answers:
    * (C:) Windows 7 system partition NTFS - 60GB (Primary) (sda1)
    * (D:) Windows DATA partition (user files) NTFS - 120GB (Primary) (sda2); I want to share this with Linux
    * Linux root Ext4 - 100GB (Primary) (sda3) (Ubuntu 10.10 64-bit)
    * Linux swap - RAM size, 3GB (sda4)
    * Linux root Ext3 - 15,9GB (Extended) (sda5) (OpenSUSE or Puppy)
    Here is my new Ubuntu 10.10 64-bit layout in use now:
    * SYSTEM partition NTFS - 199MB (Primary) (sda1) **Partition 1 does not end on cylinder boundary.(?)**
    * (C:) Windows 7 system partition NTFS - 90GB (Primary) (sda2)
    * (D:) Windows 7 RECOVERY partition NTFS - 12,90GB (Primary) (sda3)
    * Linux system partition EXTENDED - 195GB (Logical)
    * Linux root Ext4 - 10GB (Extended) (sda5)
    * Linux home Ext3 - 185GB (Extended) (sda6)
    I didn't know whether I could wipe all the previous partitions when I installed Ubuntu because of the RECOVERY partition, so I just made space for my extended partition by deleting the HP_TOOLS (FAT32) partition. By doing this I managed to successfully install Ubuntu 64-bit, but I couldn't actually make the partition for the swap or for a third Linux OS. Question 1: What is the right way to partition for a Windows 7/Ubuntu 10.10 dual/triple boot on an OEM laptop? Thank you in advance for your advice and suggestions, and Happy New Year to all!

    Read the article

  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which uses rsyslogd. How can I configure rsyslogd to send these router / switch logs to a specific file, based on their source IP address? I do not want to pollute general system logs with these entries. For instance: all routers in Chicago (source ip block: 172.17.25.0/24) to only log to /var/log/net/chicago. all routers in Dallas (source ip block 172.17.27.0/24) to only log to /var/log/net/dallas. Finally, these logs should be rotated daily for up to 30 days and compressed. NOTE: I am answering my own question
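    Since a self-answer is promised, here is a sketch of the usual approach on a Debian-era rsyslogd, using property-based filters on the sender's IP (paths and IP blocks match the example above; adjust to taste):

        # /etc/rsyslog.d/network-devices.conf
        # accept syslog from the routers/switches over UDP
        $ModLoad imudp
        $UDPServerRun 514

        # Chicago routers -> their own file, then discard so nothing
        # leaks into the general system logs
        :fromhost-ip, startswith, "172.17.25." /var/log/net/chicago
        & ~

        # Dallas routers
        :fromhost-ip, startswith, "172.17.27." /var/log/net/dallas
        & ~

    Daily rotation with 30 days of compressed history is a small logrotate stanza:

        # /etc/logrotate.d/network-devices
        /var/log/net/chicago /var/log/net/dallas {
            daily
            rotate 30
            compress
            missingok
            notifempty
        }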

    Read the article

  • Changes to the LINQ-to-StreamInsight Dialect

    - by Roman Schindlauer
    In previous versions of StreamInsight (1.0 through 2.0), CepStream<> represents temporal streams of many varieties:
    * Streams with 'open' inputs (e.g., those defined and composed over CepStream<T>.Create(string streamName))
    * Streams with 'partially bound' inputs (e.g., those defined and composed over CepStream<T>.Create(Type adapterFactory, ...))
    * Streams with fully bound inputs (e.g., those defined and composed over To*Stream - sequences or DQC)
    The stream may be embedded (where Server.Create is used) or remote (where Server.Connect is used).
    When adding support for new programming primitives in StreamInsight 2.1, we faced a choice: add a fourth variety (use CepStream<> to represent streams that are bound to the new programming model constructs), or introduce a separate type that represents temporal streams in the new user model. We opted for the latter. Introducing a new type has the effect of reducing the number of (confusing) runtime failures due to inappropriate uses of CepStream<> instances in the incorrect context. The new types are:
    * IStreamable<>, which logically represents a temporal stream.
    * IQStreamable<> : IStreamable<>, which represents a queryable temporal stream. Its relationship to IStreamable<> is analogous to the relationship of IQueryable<> to IEnumerable<>. The developer can compose temporal queries over remote stream sources using this type.
    The syntax of temporal queries composed over IQStreamable<> is mostly consistent with the syntax of our existing CepStream<>-based LINQ provider. However, we have taken the opportunity to refine certain aspects of the language surface. Differences are outlined below. Because 2.1 introduces new types to represent temporal queries, the changes outlined in this post do not impact existing StreamInsight applications using the existing types!

    SelectMany
    StreamInsight does not support the SelectMany operator in its usual form (which is analogous to SQL's "CROSS APPLY" operator):
        static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> source, Func<T, IEnumerable<R>> collectionSelector)
    It instead uses SelectMany as a convenient syntactic representation of an inner join. The parameter to the selector function is thus unavailable. Because the parameter isn't supported, its type in StreamInsight 1.0 - 2.0 wasn't carefully scrutinized. Unfortunately, the type chosen for the parameter is nonsensical to LINQ programmers:
        static CepStream<R> SelectMany<T, R>(this CepStream<T> source, Expression<Func<CepStream<T>, CepStream<R>>> streamSelector)
    Using Unit as the type for the parameter accurately reflects StreamInsight's capabilities:
        static IQStreamable<R> SelectMany<T, R>(this IQStreamable<T> source, Expression<Func<Unit, IQStreamable<R>>> streamSelector)
    For queries that succeed - that is, queries that do not reference the stream selector parameter - there is no difference between the code written for the two overloads:
        from x in xs
        from y in ys
        select f(x, y)

    Top-K
    The Take operator used in StreamInsight causes confusion for LINQ programmers because it is applied to the (unbounded) stream rather than the (bounded) window, suggesting that the query as a whole will return k rows:
        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)
    The use of SelectMany is also unfortunate in this context because it implies the availability of the window parameter within the remainder of the comprehension. The following compiles but fails at runtime:
        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select win).Take(k)
    The Take operator in 2.1 is applied to the window rather than the stream.
    Before:
        (from win in xs.SnapshotWindow()
         from x in win
         orderby x.A
         select x.B).Take(k)
    After:
        from win in xs.SnapshotWindow()
        from b in
            (from x in win
             orderby x.A
             select x.B).Take(k)
        select b

    Multicast
    We are introducing an explicit multicast operator in order to preserve expression identity, which is important given the semantics about moving code to and from StreamInsight. This also better matches existing LINQ dialects, such as Reactive. This pattern enables expressing multicasting in two ways:
    Implicit:
        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = from y1 in ys
                 from y2 in ys.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
                 select y1 + y2;
    Explicit:
        var ys = from x in xs
                 where x.A > 1
                 select x;
        var zs = ys.Multicast(ys1 =>
            from y1 in ys1
            from y2 in ys1.ShiftEventTime(_ => TimeSpan.FromSeconds(1))
            select y1 + y2);
    Notice the product translates an expression using implicit multicast into an expression using the explicit multicast operator. The user does not see this translation.

    Default window policies
    Only default window policies are supported in the new surface. Other policies can be simulated by using AlterEventLifetime.
    Before:
        xs.SnapshotWindow(
            WindowInputPolicy.ClipToWindow,
            SnapshotWindowInputPolicy.Clip)
    After:
        xs.SnapshotWindow()
    Before:
        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.PointAlignToWindowEnd)
    After:
        xs.TumblingWindow(
            TimeSpan.FromSeconds(1))
    Before:
        xs.TumblingWindow(
            TimeSpan.FromSeconds(1),
            HoppingWindowOutputPolicy.ClipToWindowEnd)
    After: Not supported …

    LeftAntiJoin
    Representation of LASJ as a correlated sub-query in the LINQ surface is problematic as the StreamInsight engine does not support correlated sub-queries (see discussion of SelectMany). The current syntax requires the introduction of an otherwise unsupported 'IsEmpty()' operator. As a result, the pattern is not discoverable and implies capabilities not present in the server. The direct representation of LASJ is used instead.
    Before:
        from x in xs
        where
            (from y in ys
             where x.A > y.B
             select y).IsEmpty()
        select x
    After:
        xs.LeftAntiJoin(ys, (x, y) => x.A > y.B)
    Before:
        from x in xs
        where
            (from y in ys
             where x.A == y.B
             select y).IsEmpty()
        select x
    After:
        xs.LeftAntiJoin(ys, x => x.A, y => y.B)

    ApplyWithUnion
    The ApplyWithUnion methods have been deprecated since their signatures are redundant given the standard SelectMany overloads.
    Before:
        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count())
    After:
        xs.GroupBy(x => x.A).SelectMany(
            gs =>
            from win in gs.SnapshotWindow()
            select win.Count())
    Before:
        xs.GroupBy(x => x.A).ApplyWithUnion(gs => from win in gs.SnapshotWindow() select win.Count(), r => new { r.Key, Count = r.Payload })
    After:
        from x in xs
        group x by x.A into gs
        from win in gs.SnapshotWindow()
        select new { gs.Key, Count = win.Count() }

    Alternate UDO syntax
    The representation of UDOs in the StreamInsight LINQ dialect confuses cardinalities. Based on the semantics of user-defined operators in StreamInsight, one would expect to construct queries in the following form:
        from win in xs.SnapshotWindow()
        from y in MyUdo(win)
        select y
    Instead, the UDO proxy method is referenced within a projection, and the (many) results returned by the user code are automatically flattened into a stream:
        from win in xs.SnapshotWindow()
        select MyUdo(win)
    The "many-or-one" confusion is exemplified by the following example that compiles but fails at runtime:
        from win in xs.SnapshotWindow()
        select MyUdo(win) + win.Count()
    The above query must fail because the UDO is in fact returning many values per window while the count aggregate is returning one.
    Original syntax:
        from win in xs.SnapshotWindow()
        select win.UdoProxy(1)
    New alternate syntax:
        from win in xs.SnapshotWindow()
        from y in win.UserDefinedOperator(() => new Udo(1))
        select y
    -or-
        from win in xs.SnapshotWindow()
        from y in win.UdoMacro(1)
        select y
    Notice that this formulation also sidesteps the dynamic type pitfalls of the existing "proxy method" approach to UDOs, in which the type of the UDO implementation (TInput, TOutput) and the type of its constructor arguments (TConfig) need to align in a precise and non-obvious way with the argument and return types for the corresponding proxy method.

    UDSO syntax
    UDSO currently leverages the DataContractSerializer to clone initial state for logical instances of the user operator. Initial state will instead be described by an expression in the new LINQ surface.
    Before:
        xs.Scan(new Udso())
    After:
        xs.Scan(() => new Udso())

    Name changes
    * ShiftEventTime => AlterEventStartTime: the alter event lifetime overload taking a new start time value has been renamed.
    * CountByStartTimeWindow => CountWindow

    Read the article

  • Remote Desktop over SSH SOCKS proxy to bypass firewall

    - by scrumpyjack
    Hi folks, I'm trying to connect to a Windows server from my Mac using RDC2.1 for Mac. The problem is the server I need to connect to is guarded by the evil dragon - IP-based access control on a completely separate network. I have an IP I can get in on, but it's at my office (i.e. a completely separate network). Because that network isn't set up for VPN, I've set up a SOCKS proxy through an SSH tunnel (which is all working fine). (SSH proxy) Me (on my Mac) ----------> Office Linux box ----> Windows server (home network) (office network) (other network) From my Linux server in my office (the SSH server) I can telnet to port 3389 on the Windows server, no problem. But from my Mac I can't get so much as a squeak out of it. Any ideas?
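    For a single RDP target, one commonly used alternative is to skip the SOCKS layer entirely and use a plain SSH local port forward through the office box (a sketch; host names are placeholders):

        # on the Mac: forward local port 3389 to the Windows server via the office Linux box
        ssh -L 3389:windows-server.other-network:3389 user@office-linux-box

        # then point RDC at localhost:3389 instead of the Windows server's address

    If the SOCKS route is kept instead, the RDP client itself has to be able to speak to the SOCKS proxy (many clients cannot), which would explain getting no response at all on the Mac even though the tunnel works.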

    Read the article

  • Centralize proxy settings for all the application on my workstation

    - by Leonardo
    As a consultant, I work at different clients' premises. Most of them have specific proxy settings, but not all of the applications installed on my laptop take their settings from the system preferences, which would let me change the settings in one place. Some of them bypass system preferences completely and present their own dialog for entering data such as username, host and password. I am looking for a convenient and not too intrusive way to share a common place where I could enter this data, and maybe persist it. 'Automatic' switching would be ideal, for example based on some network identification, but I have no problem entering the data manually. I am not an IT expert, but to explain myself clearly: I am looking for something like what a .pac file does for browsers. The relevant OSes I am using are Mac OS X and Linux (Ubuntu).
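    For reference, the browser-side mechanism mentioned above is just a JavaScript function; a minimal .pac file that switches proxies based on the client's current subnet (the addresses here are made up) looks like this:

        function FindProxyForURL(url, host) {
            // on client A's office network, use their proxy
            if (isInNet(myIpAddress(), "10.1.0.0", "255.255.0.0"))
                return "PROXY proxy.client-a.example:8080";
            // anywhere else, go direct
            return "DIRECT";
        }

    Non-browser applications generally ignore PAC files, which is exactly the gap the question is about; on Linux, switching the http_proxy/https_proxy environment variables per network location is the usual workaround for command-line tools.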

    Read the article

  • Book Review&ndash;Getting Started With OAuth 2.0

    - by Lori Lalonde
    Getting Started With OAuth 2.0, by Ryan Boyd, provides an introduction to the latest version of the OAuth protocol. The author starts off by exploring the origins of OAuth, along with its importance, and why developers should care about it. The bulk of this book involves a discussion of the various authorization flows that developers will need to consider when developing applications that will incorporate OAuth to manage user access and authorization. The author explains in detail which flow is appropriate to use based on the application being developed, as well as how to implement each type with step-by-step examples. Note that the examples in the book are focused on the Google and Facebook APIs. Personally, I would have liked to see some examples with the Twitter API as well. In addition to that, the author also discusses security considerations, error handling (what is returned if the access request fails), and access tokens (when are access tokens refreshed, and how access can be revoked). This book provides a good starting point for those developers looking to understand what OAuth is and how they can leverage it within their own applications. The book wraps up with a list of tools and libraries that are available to further assist the developer in exploring the APIs supporting the OAuth specification. I highly recommend this book as a must-read for developers at all levels that have not yet been exposed to OAuth. The eBook format of this book was provided free through O'Reilly's Blogger Review program. This book can be purchased from the O'Reilly book store at: : http://shop.oreilly.com/product/0636920021810.do

    Read the article

  • Software Design Idea for multi tier architecture

    - by Preyash
    I am currently investigating a multi-tier architecture design for a web-based application in MVC3. I already have an architecture but I'm not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components:
    * DataTier (contains EF POCO objects)
    * DomainModel (contains domain-related objects)
    * Global (among other common things, it contains Repository objects for CRUD to the DB)
    * Business Layer (business logic, interaction between data and client, and CRUD using the repository)
    * Web (Client) (which talks to DomainModel and Business but also has its own ViewModels, e.g. for Create and Edit views)
    Note: I am using ValueInjector for converting one type of entity to another (which is proving to be an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need the domain model? (I think I do, since I expose my business logic via WCF to external clients.) What is happening is that for a simple database insert it has to (1) create a ViewModel, (2) convert the ViewModel to a DomainModel for the Business layer to understand, (3) have Business convert it to a DataModel for the Repository - and then the data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as that does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,

    Read the article

  • Ubuntu Lenovo OCZ Agility3 - No Grub after install

    - by Michael
    I've tried a dual-boot (Win7 + Ubuntu) installation on a Lenovo E330 with a 240 GB Agility3... Conclusions:
    Ubuntu: Ubuntu 12.04 x86_64 (21.06.2012) is not able to install GRUB in a bootable way. GRUB gets installed and update-grub runs during installation and also recognizes the Windows OS, but after a restart it boots directly to Windows. This is directly connected to the OCZ Agility3: on a good old-fashioned hard disk (the kind with moving parts) Ubuntu installs GRUB in a bootable manner with no problem.
    PinguyOS: PinguyOS 12.04 LTS x86_64 (which is an Ubuntu-based distro) is able to handle the GRUB installation on the OCZ Agility3. However, they both use GRUB 1.99...
    What I did: Installed Windows. Installed Ubuntu. Installed PinguyOS.
    GRUB updates: GRUB updates are only possible through PinguyOS, which means you have to edit the Ubuntu GRUB entries manually in the PinguyOS system after kernel updates on Ubuntu.
    What I've already tried:
    * Firmware upgrade of the OCZ (live stick, successful)
    * Install Ubuntu GRUB to sda
    * Install Ubuntu GRUB to sdc (Ubuntu partition)
    * Install Ubuntu GRUB to /boot
    * update-grub manually after install
    * restore grub
    Any ideas appreciated.

    Read the article

  • Modern open source NIDS/HIDS and consoles?

    - by MattC
    Years back we set up an IDS solution by placing a tap in front of our exterior firewall, piping all the traffic on our DS1 through an IDS box and then sending the results off to a logging server running ACiD. This was around 2005-ish. I've been asked to revamp the solution and expand on it and looking around, I see that the last release of ACiD was from 2003 and I can't seem to find anything else that seems even remotely up-to-date. While these things may be feature complete, I worry about library conflicts, etc. Can anyone give me suggestions for a Linux/OpenBSD based solution using somewhat modern tools? Just to be clear, I know that Snort is still actively developed. I guess I'm more in the market for a modern open-source web console to consolidate the data. Of course if people have great experiences with IDS' other than Snort I'm happy to hear about it.

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing a Maya animation into Unity. I set up a simple cylinder with 2 bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked it all - bones, IK (the animation, by selecting everything in the graph editor) and even the cylinder. I saved the scene, then selected all and exported as FBX with animation and bake checked. In Unity I imported it, and in the preview I am able to see the animation. When I load the model into the scene and press play (after assigning the controller), I can see the animation too. But when I try to script it and control the animation, nothing happens. As a test, I tried the following in the Update method: if(animation.isPlaying) Debug.Log("Animation Works"); else Debug.Log("Animation not working"); Neither message is ever logged. My animation is called "bend", so just to try it I did the following, and nothing happens: animation.Play("bend"); Can you please advise, based on my steps, whether I am missing something? Do I need to add the controller, or is that an unnecessary step? Did I mess up on the Maya part or the Unity part? Thanks for the help.
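    A minimal diagnostic script along these lines may help narrow it down (a sketch, not a confirmed fix; it assumes the clip was imported for the legacy Animation component - if the FBX import created an Animator instead, there is no Animation component and neither log line above would ever fire, which matches the symptom):

        using UnityEngine;

        public class BendTester : MonoBehaviour
        {
            void Start()
            {
                // legacy animation system: the clip must be listed on an Animation component
                Animation anim = GetComponent<Animation>();
                if (anim == null)
                {
                    Debug.Log("No Animation component - the clip was probably imported for an Animator instead");
                    return;
                }
                if (anim.GetClip("bend") == null)
                {
                    Debug.Log("Animation component found, but no clip named 'bend' is assigned to it");
                    return;
                }
                anim.Play("bend");
            }
        }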

    Read the article

  • QuickTime X incorrect aspect ratio for H.264 video

    - by Adam Robinson
    I'm running Snow Leopard and have a serious issue with QuickTime X. I have a Samsung HMX-H100N/XAA camcorder that records H.264 video in either 720p or 1080i. In either of these resolutions, QuickTime X (and, by extension, all QuickTime-associated applications like FCP, iMovie, etc.) displays an incorrect aspect ratio for all video produced by this camcorder. For example, 720p video is reported as being 1280x720 in the movie inspector (which is normal), but the displayed size is always at an aspect ratio of something like 63:20 (never heard of such a ratio) with sizes like 1700x539. If I open the video in QuickTime 7 player on the same computer, it is displayed correctly. If I process the video through something like MPEG Streamclip to transcode it, it displays correctly. As it stands right now I have to transcode all of my video in order to use it in any iLive (or other QT-based application) unless I want it to look ridiculous. I've tried installing Perian, but that seemed to have no effect.

    Read the article

  • I know how to program, and how to learn how to program, but how/where do you learn how to make systems properly?

    - by Ryan
    There are many things that need to be considered when making a system, let's take for example a web based system where users log in and interact with each other, creating and editing content. Now I have to think about security, validation (I don't even think I am 100% sure what that entails), "making sure users don't step on each others feet" (term for this?), preventing errors in many cases, making sure database data doesn't become problematic through unexpected... situations? All these things I don't know how or where to learn, is there a book on this kind of stuff? Like I said there seems to be a huge difference between writing code and actually writing the right code, know what I mean? I feel like my current programming work lacks much of what I have described and I can see the problems it causes later, and then the problems are much harder to solve because data exists and people are using it. So can anyone point me to books or resources or the proper subset of programming(?) for this type of learning? PS: feel free to correct my tags, I don't know what I am talking about. Edit: I assume some of the examples I wrote apply to other types of systems too, I just don't know any other good examples because I've been mostly involved in web work.

    Read the article

  • Windows Azure Event

    - by Blog Author
    Get cloud ready with Windows Azure The cloud is everywhere and here at Microsoft we’re flying high with our cloud computing release, Windows Azure. As most of you saw at the Professional Developers Conference, the reaction to Windows Azure has been nothing short of “wow” – and based on your feedback, we’ve organized this special, all-day Windows Azure Firestarter event to help you take full advantage of the cloud. Maybe you've already watched a webcast, attended a recent MSDN Event on the topic, or done your own digging on Azure. Well, here's your chance to go even deeper. This one-of-a-kind event will focus on helping developers get ‘cloud ready’ with concrete details and hands-on tactics. We’ll start by revealing Microsoft’s strategic vision for the cloud, and then offer an end-to-end view of the Windows Azure platform from a developer’s perspective. We’ll also talk about migrating your data and existing applications (regardless of platform) onto the cloud. We’ll finish up with an open panel and lots of time to ask questions. Following this event, please join us for an engaging conversation about any and all Cloud Computing topics. This FREE event is hosted by Northwest Cloud, the cloud agnostic community group, and sponsored by Microsoft. http://www.nwcloud.org/redmond/2010-04-06

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with AMD A10-4655M APU. I tried installing Ubuntu with wubi--the first attempt ended up with a 'AMD unsupported hardware' watermark that I wasn't able to remove (the appeared when I tried to update the drivers as Ubuntu suggested) On the second attempted install Ubuntu installed (I stayed away from the suggested drivers) but the performance was extremely poor----as in Windows Vista poor. I am not sure what the solution is--if I need to wait until there is a kernel update with Ubuntu or if there are other solutions--I realise this is a new APU for the market. I would love to have Ubuntu 12.04 up and running--Windows 7 does very well with this new processor so Ubuntu should, well, be lightening speed. The trial on the Sleekbook with Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB. By using either the 'Try Ubuntu' or 'Install Ubuntu' options resulted in the usual purple Ubuntu splash screen, followed by nothing...as in a black screen without any hint of life. Interestingly one can hear the Ubuntu intro sound. In case you are wondering, this same USB was trialed subsequently on another computer with and Intel Atom Processor. Worked flawlessly. Lastly the second trial on the Sleekbook resulted in the same results as the first paragraph. Perhaps 12.10 Beta will overcome this issue, or the finalised 12.10 release in October. I don't have the expertise to know what the cause of the behaviour is-the issue could be something else entirely. Sadly, the Windows 7 performance is very good with this processor-very similar and in some instances better to the 2nd generation Intel i5 based computer I use at my workplace. Whatever the cause is for the performance with Ubuntu 12.04 or 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain once the Ubuntu issue is worked on and solutions arise, the Ubuntu performance will probably be better than ever.

    Read the article

  • Hyperion EPM 11.1.2.3 Webcast Tutorials

    - by Mike.Hallett(at)Oracle-BI&EPM
    These LIVE presentation Webcast Tutorials for Partners will be delivered in August 2013:
    * Oracle Hyperion Planning on Exalytics In-Memory Machine - August 6, 2013
    * Oracle Hyperion Tax Provision - August 8, 2013
    * Oracle Planning and Budgeting Cloud Service - August 13, 2013
    Go here for more details and to register for these. There are also new updated Webcast Tutorials for Oracle Partners in our EPM 11.1.2.3 Update Series:
    * Oracle Hyperion Planning 11.1.2.3 (PS3)
    * Oracle Hyperion Calculation Manager 11.1.2.2 Refresher and 11.1.2.3 (PS3)
    * NEW Oracle Data Relationship Management 11.1.2.3 (PS3)
    * NEW Oracle Hyperion Financial Data Quality Management 11.1.2.3 (PS3)
    * NEW Oracle Hyperion Financial Close Suite 11.1.2.3 (PS3)
    * NEW Oracle Hyperion Profitability & Cost Management 11.1.2.3 (PS3)
    * Introducing Oracle Data Relationship Governance (DRG)
    Also note new content for Oracle BI Applications 11g with ODI:
    * NEW Overview and Architecture of Oracle BI Applications 11.1.1.7.1 for ODI
    * NEW Configuring Oracle BI Applications 11.1.1.7.1 for ODI
    These are all part of the compilation of Oracle BI/EPM online tutorials and webinars for Partners, where you can find many topics covered.

    Read the article

  • Where can I find a list of the highest resolution monitors for sale?

    - by speedmetal
    I am always on the lookout for the latest and greatest monitors, but for some reason, I have never found a good resource for this information. And while we know months in advance what new processor will be released, it doesn't seem like we ever know what new monitors will be released until they are. I am tempted to buy a Dell 3008WFP, but since the 30" 2560x1600 monitors have been out for 6 years, I would expect something better is about to be released. Where can I find out what is available in the high resolution / widescreen market and what is soon to be released? EDIT: I did finally find a resource for this information: Comprehensive List of IPS Based LCD Monitors However, if anyone finds another similar or possibly better resource, I will give it a correct answer mark. Thanks!

    Read the article

  • Simplest way to render image over top of another with another image used as mask in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha) and then show a second image (what I call a light map or specular map) that is partially shown over the top based on a third image (which is effectively a mask). The effect is similar to this effect, except instead of simply darkening or lightening the background image using the third image, it needs to mask the second without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. Happy to use a shader approach. Cheers.
    Additional
    These third images are obviously light sources being cast onto the first image, showing the specular information from the second image to simulate the light 'shining' off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two images will need to be combined (added?) to produce a final image which masks the second image. Don't worry about things like coloured lights. For this technique the lights are all considered white.
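    Since a shader is acceptable, here is a sketch of how this usually looks as a small fragment shader (GLSL 1.20-era syntax; the texture names and the additive handling of overlapping lights are assumptions, not part of the question):

        uniform sampler2D baseTex;   // always-visible background
        uniform sampler2D specTex;   // specular / light map, same UV space as baseTex
        uniform sampler2D maskTex;   // the moving light mask(s), rendered into this texture each frame

        varying vec2 uv;

        void main()
        {
            vec3  base = texture2D(baseTex, uv).rgb;
            vec3  spec = texture2D(specTex, uv).rgb;
            // two overlapping lights: add their alphas into maskTex (clamped to 1.0) beforehand
            float mask = texture2D(maskTex, uv).a;

            // background at full strength, specular revealed only where the mask allows
            gl_FragColor = vec4(base + spec * mask, 1.0);
        }

    Without shaders, the closest fixed-function equivalent is to pre-multiply the specular image by the mask into an offscreen buffer, then draw that result over the background with additive blending (glBlendFunc(GL_ONE, GL_ONE)).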

    Read the article

  • OPN Specialized Latest News (15th November)

    - by swalker
    HELPING YOU TO SPECIALIZE WebCenter Implementation Specialist Exam Preparation Webcasts: WebCenter Content And WebCenter Portal Oracle Partner Network would like to invite you to Refresh Courses for WebCenter Content and WebCenter Portal, to help partners to prepare for the WebCenter Implementation Specialist EXAMS. This is a 3 hours intensive refresher partner-only training session, providing attendees with an overview of WebCenter Content and WebCenter Portal functions and related topics. After the refresher part you will be able to take the relevant Implementation Specialist EXAM depending on your personal focus. NOTE: This is only suitable for experienced WebCenter Content or WebCenter Portal practitioners Who should attend? Partner Consultants who want to become an Oracle WebCenter Content or a WebCenter Portal Certified Implementation Specialist or both, that will help them to differentiate themselves in front of customers and support their Companies to become Specialized. Webcast Details: Click here to read more... Specialized Partners Only! New Service to Promote Your Events The Partner Event Publisher has just been made available to all specialized partners in EMEA.  Partners now have the opportunity to publish their events to the Oracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. Click here to read more information and watch a short video demo. VADs Get Specialized Effective November 1, 2011 , VADs, with a valid Value Added Distributor Agreement will no longer be required to meet customer reference requirements outlined in the business criteria section in order to become specialized. VADs must continue meet all other business and competency criteria set forth in the applicable Knowledge Zone prior to specialization approval. New Certification Pillar Axiom 600 Storage System Your opportunity to take the Pillar Axiom 600 Storage System Essentials (1Z0-581) Exam is vailable now in beta. Pass the exam so you can become a Pillar Axiom 600 Storage Systems Implementation Specialist! Free vouchers are available for Oracle Partners! If you would like to receive a free Beta exam voucher, please send your request to [email protected] and include your name, business email address, company, and the Exam name Pillar Axiom 600 Storage System Essentials Beta exam. New Certification Available: Oracle Utilities Customer Care and Billing Oracle Utilities Customer Care and Billing 2 Essentials (1Z0-562) is a solution designed to help you meet market windows and regulatory deadlines while enjoying a low total cost of ownership and a high return on investment. Take the exam now to become an  Oracle Utilities Customer Care and Billing 2 Essentials Implementation Specialists. MEASURING YOUR SUCCESS We had 1674 Specialized Partners covering 5364 Specializations. Please note that due to OPN contract renewals at any given point in time there are valid Specialized Partners and Specializations which are temporarily not captured in the total statistics. An incremental 1961 individuals were accredited as Implementation Specialists giving an EMEA cumulative total of 9598 Implementation Specialists 26 ISVs obtained one or more Ready's, for a total of 53 Ready's Don't forget! You can submit your own press releases to Oracle! Every time you achieve specialization we'd like to support you getting the message out! Press guidelines and a submission link can be found on the OPN Portal here.

    Read the article

  • Personal Software Process (PSP1)

    - by gentoo_drummer
    I'm trying to figure out an exercise but it doesn't really makes to much sense.. I'm not asking someone to provide the solution. just to try and analyse what needs to be done in order to solve this. I'm trying to understand which PSP 1.0 1.1 process I should use. PROBE? Or something else? I would greatly appreciate some help on this one from someone that has experience with the Personal Software Process Methodology.. Here is the question. For the reference case (“code1.c”), the following s/w metrics are provided: man-hours spent in implementation phase (per-module): 2,7 mh/file man-hours spent in testing phase (per-module): 4,3 mh/file estimated number of bugs remaining (per-module): 0,3 errors/function, 4 errors/module (remaining) Based on the corresponding values provided for the reference case, each of the following tasks focus on some s/w metrics to be estimated for the test case (“code2.c”): [25 marks] (estimated) man-hours required in implementation phase (per-module) [8 marks] (estimated) man-hours required in testing phase (per-module) [8 marks] (estimated) number of bugs remaining at the end of testing phase (per-module) [9 marks] Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level-1 (PSP-1), using them as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case (“code2.c”), using PSP as the basic estimation model. In order to perform the above listed tasks, students are advised to consider all phases of the PSP software development process, especially at levels PSP0 and PSP1. Both cases are to be treated as separate case-studies in the context of classic s/w development.

    Read the article

  • Is ActionScript 3 used by Serious Indie Developers?

    - by Puedes
    This question is for dedicated independent game developers: My dream is to be a game developer. I am a senior in high school who has taken Computer Science for all four years. I have used Java the whole time, but last year I started using PHP and ActionScript 3 (with Flixel). I also used Game Maker for a brief period. I apologize for this, I wanted to get that out of the way and clarify the fact that I have experience of some kind with game development. I am stuck at the moment because I don't quite know what language to use to develop games at a professional level. I am seriously interested in becoming a dedicated game developer, but this issue is really bothering me. I would like to know what the best option would be for my case, based on your experiences. Any advice is appreciated. Things to consider: I am only interested in making 2D games (I am not worried about 3D support) It would be ideal to use something that can be ported to multiple platforms (so as not to run into this problem later) I can't seem to figure out what the industry likes to use So far, this is what I have: I can't decide if it would be wise to stick with ActionScript 3, or move to C++ I know Flash would be for browser games, but what if I want to make a downloadable game, like Plants Vs. Zombies or Super Crate Box? Would Flash be a smart choice for standalone games, or did they use something else? Thank you for reading this, as I would like to stop worrying about this and make some games! Also, I hope this wasn't all over the place :) tl;dr Should I move ahead with AS3 or use something else i.e. C++

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario
    Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load-time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.
    Goal
    To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a key-frame + interpolation curve solver (reinventing Blender vertex groups and IPOs).
    Current Considerations
    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices+numNormals)*frameNumber). (VBO in STATIC)
    Known Trade-Offs
    In 1 above: Each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame. Both client and pipeline intensive.
    In 2 above: There would be few VBOs, therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large.
    Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
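    For option 2, the per-frame offset amounts to nothing more than changing the first-vertex argument of the draw call, assuming each frame's interleaved position+normal data is tightly packed and the frames are concatenated (a sketch with illustrative names, loader via GLEW assumed):

        #include <GL/glew.h>

        /* Draw one character's current frame from a per-animation GL_STATIC_DRAW VBO.
           Assumed layout: frames concatenated; each vertex = 3 float position + 3 float normal. */
        void draw_character_frame(GLuint animationVbo, GLint positionLoc, GLint normalLoc,
                                  GLsizei vertsPerFrame, GLsizei currentFrame)
        {
            const GLsizei stride = 6 * sizeof(GLfloat);

            glBindBuffer(GL_ARRAY_BUFFER, animationVbo);
            glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, stride, (const void *)0);
            glVertexAttribPointer(normalLoc, 3, GL_FLOAT, GL_FALSE, stride,
                                  (const void *)(3 * sizeof(GLfloat)));
            glEnableVertexAttribArray(positionLoc);
            glEnableVertexAttribArray(normalLoc);

            /* every character sharing this mesh just starts at a different vertex */
            glDrawArrays(GL_TRIANGLES, currentFrame * vertsPerFrame, vertsPerFrame);
        }

    If smoothing between key frames is wanted later, the next frame's block can be bound as a second set of attributes and blended in the vertex shader, without changing this buffer layout.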

    Read the article
