Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • USB mass storage device could not be mounted

    - by revo
    It's my Android phone's SD card, which Android reported as damaged last night, out of the blue! I connected it directly to a USB port with a USB SD card holder so that I could recover it with TestDisk, which had worked for me before in a similar situation. I also noticed that the file system and capacity have changed: File System: RAW, Capacity: 0 (unknown capacity). TestDisk also doesn't show it in its partition list. A 2 GB SD card isn't worth much, but it holds a lot of files and media that I need. Using a mini card reader, TestDisk displayed it in its list, but neither a quick search nor a deep search finds anything ("No partition found or selected for recovery"), after which I have to quit the program. Your help is appreciated.
    Update #2 - lsusb output:
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 002: ID 04f3:0234 Elan Microelectronics Corp.
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option -- I find they are far more obvious to someone navigating the file system, but, on the other hand, aliasing is more platform-independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything relying on symlinks on a *nix based system cannot be transferred. (I know that 64-bit versions of Windows have symlinks, but I haven't checked whether they are read correctly by the environments previously mentioned.) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are lower-level than Apache it would make sense for them to be resolved faster, but, on the other hand, I would guess that Apache has to do a lookup in either case, so it would be bound by disk reads.
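    A minimal sketch of the two approaches being compared, in the form they usually take (the paths are made up for illustration; the Apache directives are shown as comments and belong in the vhost configuration):

        # Symlink approach: expose an external directory inside the DocumentRoot.
        # Apache must allow it with "Options FollowSymLinks" on the parent directory.
        ln -s /srv/shared/assets /var/www/example.com/htdocs/assets

        # Alias approach: the same mapping done in the Apache configuration instead:
        #
        #   Alias /assets /srv/shared/assets
        #   <Directory /srv/shared/assets>
        #       Order allow,deny
        #       Allow from all
        #   </Directory>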

    Read the article

  • Ctrl key on HP Envy is broken, where can I find a replacement?

    - by NewProger
    I work as a developer, so I use key combinations a lot, and I have an HP Envy laptop. The Ctrl key is now broken for the second time. The first time I just took one from a friend, but I don't have any more friends willing to sacrifice their Ctrl key. Does anyone know where I can find one (or rather a bunch, because they are so weak and low quality)? I tried to contact HP support, but they seem to have done everything to prevent people from reaching them, and in my case, where the warranty has expired, it is not possible at all according to their rules. I also tried Googling but found nothing.

    Read the article

  • Shared FC LVM VG with LVs for each KVM VM. Clvm required?

    - by Cocoabean
    I have 2 virtual machine hosts running Ubuntu 12.04 and KVM managed with libvirt. They are both connected to the same VG, which is a LUN on my SAN over FC. I provision LVs on this shared VG for each VM. I don't think I need HA or failover, but I do want live migration between the hosts. Do I need clvm in this case? As long as I don't try to start the same VM on both hosts, should this work? Clvm requires a lot of overhead in clustering tools that I don't think I need. I can deal with manually restarting VMs on other hosts in the event of a hardware failure.
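    A minimal sketch of the workflow described, assuming a shared VG named sharedvg and a libvirt domain vm1 (both names are made up). Without clvm the usual caveat is that LVM metadata changes should only be made from one host at a time, and the other host has to rescan before it sees them:

        # On host A: carve out an LV for the VM on the shared VG
        lvcreate -L 20G -n vm1-disk sharedvg

        # On host B: re-read LVM metadata so the new LV shows up, then activate it
        vgscan
        lvchange -ay sharedvg/vm1-disk

        # Live-migrate the guest; both hosts reach the same LV over FC
        virsh migrate --live vm1 qemu+ssh://hostb/system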

    Read the article

  • Password manager solution: Symbian based phone and a Linux machine (Windows is not important, but wo

    - by Kent
    Hi, I currently use KeePassX to manage my passwords on my Linux (Xubuntu) machine. It's nice to have all the passwords encrypted, but sometimes I'd like to be able to look up a password when I'm on the go. Therefore I'm looking for a solution which I can synchronize with my phone. I have a Nokia N82, which is a Symbian OS v9.2 based phone on the S60 3rd Edition platform with Feature Pack 1. I'd prefer an open source solution if possible. In case there isn't one, I wouldn't mind paying for a good solution. If Windows can be added to the synchronization mix, that's nice, but it's absolutely not a primary requirement (I don't even have any computer running Windows).

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content for each individual user (i.e. not 'common' content like a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, but it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are: Is creating an account purely for AdSense to log in to the site with worth it in this case, seeing as it will be seeing its own 'personalized' version and not any other user's? More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.) How would you go about this problem?

    Read the article

  • How to build a "traffic AI"?

    - by Lunikon
    A project I am working on right now features a lot of "traffic" in the sense of cars moving along roads, aircraft moving around an apron, etc. As of now the available paths are precalculated, so nodes are generated automatically for crossings, which themselves are interconnected by edges. When a character/agent spawns into the world it starts at some node and finds a path to a target node by means of a simple A* algorithm. The agent follows the path and ultimately reaches its destination. No problem so far. Now I need to enable the agents to avoid collisions and to handle complex traffic situations. Since I'm new to the field of AI I looked up several papers/articles on steering behavior but found them to be too low-level. My problem consists less of the actual collision avoidance (which is rather simple in this case because the agents follow strictly defined paths) and more of situations like one agent leaving a dead end while another one wants to enter exactly the same one, or two agents meeting at a bottleneck which only allows one agent to pass at a time but both need to pass it (according to the optimal route found before), and they need to find a way to let the other one pass first. So basically the main aspect of the problem would be predicting traffic movement to avoid deadlocks. Difficult to describe, but I guess you get what I mean. Do you have any recommendations on where to start looking? Any papers, sample projects or similar things that could get me started? I appreciate your help!

    Read the article

  • Windows Azure and Server App Fabric &ndash; kinsmen or distant relatives?

    - by kaleidoscope
    Technorati Tags: tinu,windows azure,windows server,app fabric,caching windows azure If you are into Windows Azure, then it would be rather demeaning to ask if you are aware of Windows Azure App Fabric. Just in case you are not - Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications as well as services that work across network and organizational boundaries in the cloud. But some of you may have heard of another similar term floating around forums and blog posts - Windows Server App Fabric. The momentary déjà vu that you might have felt upon encountering it is not unheard of in Cloud Computing circles - http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric Many have fallen prey to this ambiguous nomenclature, but it's not without purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of Web, Composite, and Enterprise applications. Initially codenamed "Dublin", the app fabric (oops.... Windows Server App Fabric) provides add-ons like monitoring, tracking and persistence for your hosted Workflow and Services, without the developer having to worry about these functionalities. Along with this it also provides distributed in-memory caching features from Velocity. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part. So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined if Microsoft's roadmap is to be believed - "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP." One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well Microsoft, we do have our fingers crossed..... Read on... http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • How to crop black bars or zoom on Youtube and other Video websites?

    - by cloneman
    Many desktop players (VLC, MPC) have an option to 'zoom in', 'crop black bars', or crop to a specific aspect ratio. How can we do this in fullscreen on YouTube or other Flash video sites? I am the viewer, NOT the video's creator/publisher. iOS can do this (double tap to zoom, which removes black bars; zoom depth is not configurable). As far as I know, desktop computers (and Android devices) cannot do this on the fly. The only 'workaround' I've found is F11 and zooming of the entire web page - basically a fake full screen and zooming the web page beyond the screen size. Use case: watching 4:3 Flash videos from the web on a widescreen monitor. Looking for all creative solutions, including any software that accesses YouTube without using a web browser.

    Read the article

  • JavaScript loaded external content SEO

    - by user005569871
    I wonder what is the best way to have JavaScript-loaded content indexed by search engines. I know that search engines don't execute JavaScript, but I am thinking more of a progressive enhancement. I am creating a responsive website, and on the home page I will have some sections about most-visited products and recommended products that I plan to load depending on the device detected. These products will be in sliders with thumbnail images and the names of the products. If mobile is detected, the slider content will not load, and the link to the external page will be shown instead. I know that the external content will be indexed via the link to those resources. Where will users be directed from search in this case? To the external page or the home page? Will it be bad for SEO if I show only product names on the front page so they can be indexed, and hide them with CSS? What is the best way to index that content and possibly direct users from search to the home page? Also, I've seen the AJAX crawling scheme, but I would rather not use that if there is a better way.

    Read the article

  • prevent search engines indexing depending on domain

    - by Javier
    We have a dedicated server at a hosting company with a couple of dozen sites on it. It happens that the nameserver hostnames (e.g. ns1.domain.com, ns2.domain.com) have IPs that coincide with some client sites, let's say webclient1.com and webclient2.com. The problem is that for certain searches in Google, some results show up as ns1.domain.com/result instead of webclient1.com/result, which is pretty wrong and annoying for our clients. In fact, if you type ns1.domain.com or ns2.domain.com into the browser, it will load one of the client sites instead. Is there any way to prevent Google from indexing those results when its robots come to check the ns domains? This may not be the right place to ask this as well, but why is it happening? Is it a result of a bad server configuration? I'm pretty new to these matters, so thank you in advance for any help!
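    A minimal sketch of one common way to handle this, assuming Apache name-based virtual hosts (the ns host names are from the question; the paths are made up, and the X-Robots-Tag header needs mod_headers enabled): give the nameserver hostnames their own vhost that tells crawlers not to index anything, so the client sites are left untouched:

        # robots.txt served only for the ns hostnames
        cat > /var/www/ns-blocked/robots.txt <<'EOF'
        User-agent: *
        Disallow: /
        EOF

        # Dedicated vhost for the ns hostnames (Apache config, shown as a comment):
        #
        #   <VirtualHost *:80>
        #       ServerName ns1.domain.com
        #       ServerAlias ns2.domain.com
        #       DocumentRoot /var/www/ns-blocked
        #       Header set X-Robots-Tag "noindex, nofollow"
        #   </VirtualHost>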

    Read the article

  • upgrading my computer system for office use

    - by denise ellul
    Presently I have the computer system listed below. What should be changed or upgraded? I am interested in performance issues related to cache memory, bus speed, RAM and CAS latency, as well as other considerations. Thanks for your help.
    Processor (CPU): Intel Celeron Dual Core E3300 2.5 GHz
    Motherboard: Asus P5QPL-AM G41
    Main Memory (RAM): 2 GB Team Elite DDR2 PC8000
    Case: Coolermaster RC330
    Power Supply Unit: 500W EZ-Cool Standard
    Storage Device (Hard Drive): 500 GB Samsung
    Video Card: Intel GMA X4500 (on-board)
    Optical Drive: LG GH22NS50
    Sound Card: AC 97 (on-board)
    Card Reader: Akasa Black
    TFT Monitor: 19" ViewSonic
    Speakers: Logitech S120 2.0

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link only the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:
        <div class="footer">
          <!-- snip -->
          <!-- Creative Commons License -->
          <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
          <!-- /Creative Commons License -->
        </div>
    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top-ten most relevant keywords (including the site name) are: [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]. I have not done any extensive testing to find out whether or not this list even matters, and so far it doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?

    Read the article

  • Keeping Xv Overlay configuration throughout an X session.

    - by kriss
    After upgrading my Linux system from Ubuntu 9.04 to Ubuntu 10.10, I succeeded in correcting most problems (all related to Intel 82865G Integrated Graphics Adapter support; compiz is still not working, but that's another matter), but for one of them I only have a partial solution. Whenever I play a video the colors are much too saturated. This is a real problem for skin tones, which appear reddish (everyone seems to have come back from a ski vacation with deep sunburn). As this effect only occurs with videos, not with pictures, I finally figured out it was related to the Xv Overlay configuration, and I can correct it by typing:
        xvattr -a XV_SATURATION -v 120
    This changes the default saturation value, which is 500 and much too high in my case; by eye, the correct value seems to be somewhere between 100 and 150. Now my problem is that I have to type the above command each time I play a video. If I type it before running the video it has no effect; if I close the video and open a new one, I have to type it again, etc. I tried putting it in Xsession and (logically) it has no effect either. How can I get the correct setting whenever I play a video, without typing the above command every time?
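    A minimal sketch of one possible workaround, assuming the videos are launched through a small wrapper script and that a short delay is enough for the player to start using the Xv port (the player name and delay are placeholders):

        #!/bin/sh
        # play-video: start the player, then re-apply the Xv saturation once the
        # overlay is actually in use, since setting it beforehand has no effect.
        mplayer "$@" &
        player_pid=$!
        sleep 2                            # give the player time to grab the Xv port
        xvattr -a XV_SATURATION -v 120     # value found by eye in the question
        wait "$player_pid"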

    Read the article

  • Fusion 3.1 and Parallels 6 for Win7 x64

    - by Ronnie
    I read in a recent article that Parallels Desktop 6 is faster almost everywhere than VMware Fusion. I was originally using Parallels 4 before switching to VMware because of frequent Parallels crashes. Since I now use Fusion a lot on my MacBook, with a big Win7 x64 virtualized development machine that I find too slow, I am wondering whether the announced speed-up of Parallels 6 justifies going back to it. As a test I converted my Fusion 3.1 VM to a trial of Parallels Desktop 6, and my Windows Experience Index dropped from 4.7 under Fusion to 4.5 under Parallels 6, so apparently the virtualized machine is not seeing that speed benefit. Is there any optimization I can set up in Parallels to increase the WEI, or should I stay with Fusion (in which case that kind of article is just marketing)?

    Read the article

  • GPG Invalid Signature

    - by user46421
    I am having problems with the following (in an attempt to remove hyperlinks, I have removed one of the "/" from the addresses):
        W: GPG error: http://archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
        W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG B725097B3ACC3965 Launchpad lffl
        W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG 4874D3686E80C6B7 Launchpad PPA for Banshee Team
        W: GPG error: http://archive.getdeb.net jaunty-getdeb Release: The following signatures were invalid: BADSIG A8A515F046D7E7CF GetDeb Archive Automatic Signing Key <[email protected]>
        W: GPG error: http://badgerports.org lucid Release: The following signatures were invalid: BADSIG C90F9CB90E1FAD0C Jo Shields <[email protected]>
        W: GPG error: http://ppa.launchpad.net oneiric Release: The following signatures were invalid: BADSIG 976B5901365C5CA1 Launchpad PPA for transmissionbt
        W: Failed to fetch http://ppa.launchpad.net/dlecan/openjdk/ubuntu/dists/oneiric/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/dlecan/openjdk/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/sevenmachines/flash/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu/dists/oneiric/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found
    I have tried the following solutions, which were in a closed case titled "The following signatures were invalid". First of all try:
        sudo apt-get clean
        sudo apt-get update && sudo apt-get upgrade
    Some ISPs cache the packages and errors like these are reported then. If the above commands don't work, try:
        sudo apt-get update -o Acquire::http::No-Cache=True
    and again:
        sudo apt-get update && sudo apt-get upgrade
    If it still doesn't work:
        sudo apt-get update -o Acquire::BrokenProxy=true
        sudo apt-get update && sudo apt-get upgrade
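    Not from the question itself, but another commonly suggested variant for persistent BADSIG errors is to discard the cached package lists and fetch fresh copies; a minimal sketch:

        # Throw away the locally cached Release/Packages files, then re-download them
        sudo rm -rf /var/lib/apt/lists/*
        sudo mkdir -p /var/lib/apt/lists/partial
        sudo apt-get update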

    Read the article

  • Must developers understand the business domain or should the specification be sufficient?

    - by Jerome C.
    I work for a company whose domain is really difficult to understand, because it is high-tech electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification that describes what the software must do, for example specifying that a particular chart must display this kind of metric and that the metric is computed by the following arithmetic formula. This way, the developer doesn't really understand the business or what/why he is doing in this task. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can be very long and difficult. Should we give more importance to detailed specifications (although, as we know, a perfect specification does not exist), or should we train all the developers to understand the business domain? EDIT: keep in mind in your answer that the company could use external developers.

    Read the article

  • How do you record how much memory an app is using on OS X

    - by Ace Legend
    I'm on a Mac Mini with OS X 10.8.2. I am an app developer, but in this case I am building an app in C++, so I cannot use Xcode for this question. I would like to track how much memory my app is using, but I don't want to record it manually. How do I do this? MORE INFO: I want to record it all day long. I will have the app running all day, so that I can compare peaks in memory. I am not opposed to 3rd party apps, as long as they are reliable. Thanks.
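    A minimal sketch of one way to log this automatically, assuming the process is called my_app (a placeholder) and a one-minute sampling interval is enough to catch the peaks:

        #!/bin/sh
        # Append a timestamped resident-set-size sample (in KB) once a minute
        # until the process exits.
        pid=$(pgrep -n my_app)              # newest process named my_app
        while kill -0 "$pid" 2>/dev/null; do
            printf '%s %s\n' "$(date '+%F %T')" "$(ps -o rss= -p "$pid")" >> mem.log
            sleep 60
        done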

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will be part of a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a Content Driven Portal requires the development team to consider. The current plan is to deliver the following subjects, each enclosed in a separate blog post:
    Introduction – Introduction to the series of posts and what to expect at the end of the series
    Components, part 1 – UCM, Site Studio and a high-level introduction to content templates
    Components, part 2 – Page Templates and Navigation model
    Components, part 3 – Applied Customization Framework for Content Presenter Taskflows
    Scenario 1 – Enable a Portal for runtime administration
    Scenario 2 – Enable a Portal for Internationalization
    Scenario 3 – Enable a Portal for Content Workflows
    Background
    This post series has been issued to help customers, partners and consultants understand the concept of a WebCenter Portal project in which the main focus, or a majority of the portal, is content interaction. Today, most portal installations Oracle WebCenter Portal is involved in have a vast majority of content-based pages. Many portal projects have run into, or will run into, challenges; to mitigate these challenges the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios; they will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a Content Driven Portal. The selected scenarios are based on the most common use cases that we have identified to date.

    Read the article

  • How to install plug-in for Google Chrome

    - by Jeff
    Recently the Google Chrome browser has been prompting me to install a plug-in every time I visit a web page. I always say "Yes, install plug-in", but that seems to have no effect. I tried following the "Trouble installing plug-ins" help on the Chrome toolbar, but that seems to say Windows Media Player is the problem, and again, all my attempts at installing it have no effect. As far as I know, I have not changed anything, but Skype did recently upgrade itself. This is Windows 7 Professional 64-bit, and Chrome says it is up to date. I'm going to run a malware checker next, just in case - thanks!

    Read the article

  • How to Shutdown PC by Pressing Power Button

    - by AgA
    I always hibernate my PC. Sometimes when I boot, it does not recognize the mouse/keyboard or any USB devices. I've also set it up to go to sleep after 5 minutes. In that case I can't restart the PC so that USB starts working again. When I press the power button it starts to shut down but asks for confirmation twice: one is a "force shutdown" confirmation, and then there is one more. When my USB is disabled I can't answer these prompts, so I switch off the power. What I want is that, upon pressing the power button, it immediately starts shutting down without asking for any more confirmations. System details: Win-7 Home Premium 64 bit, Intel i3 530, Asus P motherboard. EDIT: It is a desktop PC.
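    Not spelled out in the question, but on Windows 7 the power-button behaviour is part of the active power plan, so one option is to set the button action to "Shut down" (value 3) for the current plan. A hedged sketch using powercfg from an elevated command prompt; verify the setting names with "powercfg -query" on your own system:

        rem Set the power button action to "Shut down" (3) for AC and battery,
        rem then re-apply the current power plan.
        powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS PBUTTONACTION 3
        powercfg -setdcvalueindex SCHEME_CURRENT SUB_BUTTONS PBUTTONACTION 3
        powercfg -setactive SCHEME_CURRENT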

    Read the article

  • On RouterOS, how will transparent proxying (with DNAT) affect reporting of netflows?

    - by Tim
    I have a box running Mikrotik RouterOS, which is set up to do transparent web proxying, as described here. In short, this means that I have a firewall rule for destination NAT causing any port 80 traffic to get redirected to port 8080 on the router, which is received by the Mikrotik local web proxy. The local web proxy then makes the web request on the client's behalf, in this case to a parent web proxy server (which in turn does the real web request). My question is, how will this two-part process get reported in the logging of traffic flow information (netflows)? Looking at the logged information, what I seem to be seeing is this: One flow recorded from client machine (private IP) to remote proxy (8080) Another flow recorded from router to remote proxy (8080) The original request that the client made to port 80 isn't recorded. I want to write code to analyse traffic usage, so I want to be sure I'm not losing information if I discard the latter of these.
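    For reference, a sketch of the kind of RouterOS rule the question describes (the exact rule on the box may differ; the port numbers are taken from the question):

        # Transparent proxy redirect: any TCP traffic to port 80 is DNAT'ed to the
        # router's local web proxy listening on 8080.
        /ip firewall nat add chain=dstnat protocol=tcp dst-port=80 \
            action=redirect to-ports=8080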

    Read the article

  • In Windows, a batch file with a recursive for loop and a file name including blanks

    - by uvts_cvs
    Hello, I have a folder tree like this (it's only an example, it will be deeper in my real case):
        C:\test
        |
        +---folder1
        |       foo bar.txt
        |       foobar.txt
        |
        +---folder2
        |       foo bar.txt
        |       foobar.txt
        |
        \---folder3
                foo bar.txt
                foobar.txt
    My files have one or more spaces in the name and I need to perform a command on them, so I am interested in foo bar.txt but not in foobar.txt. I tried (inside a batch file):
        for /r test %%f in (foo bar.txt) do if exist %%f echo %%f
    where the command is the simple echo. It does not work because the space is skipped and I get no output. This works but it is not what I need:
        for /r test %%f in (foobar.txt) do if exist %%f echo %%f
    It prints:
        C:\test\folder1\foobar.txt
        C:\test\folder2\foobar.txt
        C:\test\folder3\foobar.txt
    I tried using the quotation mark (") but it does not work:
        for /r test %%f in ("foo bar.txt") do if exist %%f echo %%f
    It does not work because the quotation mark is still included in the output:
        C:\test\folder1\"foo bar.txt"
        C:\test\folder2\"foo bar.txt"
        C:\test\folder3\"foo bar.txt"
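    Not part of the question, but one commonly used workaround is to let dir do the recursive matching and iterate over whole lines with for /f; a minimal sketch (echo stands in for the real command):

        @echo off
        rem "delims=" keeps each full path intact even though it contains spaces.
        for /f "delims=" %%f in ('dir /b /s "test\foo bar.txt"') do echo %%f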

    Read the article

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs down to the feature set that I need, while also providing some functionality whose implementation will be unique to each platform. At the moment, I have implemented it by defining an abstract class, which has methods that create objects implementing interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the point of view of my application, but I have done it, for my first API, by creating chains of subclasses which add more specific functionality as the features of the APIs they expose become less generic. An example would probably demonstrate this better:
        //The interface as seen by the application
        interface IGenericResource
        {
            byte[] GetSomeData();
        }
        interface ISpecificResourceOne : IGenericResource
        {
            int SomePropertyOfResourceOne { get; }
        }
        interface ISpecificResourceTwo : IGenericResource
        {
            string SomePropertyOfResourceTwo { get; }
        }
        public abstract class MyLayer
        {
            public abstract ISpecificResourceOne CreateResourceOne();
            public abstract ISpecificResourceTwo CreateResourceTwo();
            public abstract void UseResourceOne(ISpecificResourceOne one);
            public abstract void UseResourceTwo(ISpecificResourceTwo two);
        }
        //The layer as created in my library
        public class LowLevelResource : IGenericResource
        {
            public byte[] GetSomeData() { /* ... */ return null; }
        }
        public class ResourceOne : LowLevelResource, ISpecificResourceOne
        {
            public int SomePropertyOfResourceOne { get { /* ... */ return 0; } }
        }
        public class ResourceTwo : ResourceOne, ISpecificResourceTwo
        {
            public string SomePropertyOfResourceTwo { get { /* ... */ return null; } }
        }
        public partial class Implementation : MyLayer
        {
            public override void UseResourceOne(ISpecificResourceOne one)
            {
                DoStuff((ResourceOne)one);
            }
        }
    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do this, so I simulate the second one with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements that interface should be able to be used where that interface is used. But here that is clearly not the case, because the interfaces are being used to allow an object from the layer to masquerade as something else, without the application needing to have access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (Language is C#)

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU but there are a couple of things that aren't clear that I need to know to optimize my drawing. Specifically, what calls make the distinction between batches. I know that any state changes cause a new batch, so that includes: Render State Changes Buffer Changes Shader Changes Render Target Changes Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch? Even if I were to issue the same call twice, with no state changes, or call it once on on part of the buffer, then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer binding and that would count as a state change? (or is it a case of you do but it saves 16 calls so its OK?)

    Read the article
