Search Results

Search found 21028 results on 842 pages for 'single player'.

  • How to stream multiple files on demand in VLC?

    - by romkyns
    Is there any way at all that I can set up VLC on a server PC in such a way that I can access a list of all my videos from another PC, and pick one to be streamed on demand? I've been pointed at this streaming guide (pdf), but it's pretty useless. For a start, most of the menus in those screenshots don't match the current version of VLC, and then it sort of assumes you already know what you're doing. So far I have managed to figure out how to stream a single file, which I must pick on the server PC before watching - pretty useless if you ask me! The impenetrable "UI" doesn't help either... (P.S. The reason I'm going for streaming rather than the much simpler network-drive setup is described in this question)
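
    One possible direction (a sketch, not from the original question, and option names vary between VLC versions): VLC's VLM component can expose a whole list of files as video-on-demand streams, one RTSP URL per file. Assuming a config file vlm.conf on the server like:

        new movie1 vod enabled
        setup movie1 input "/videos/movie1.mkv"
        new movie2 vod enabled
        setup movie2 input "/videos/movie2.mkv"

    and VLC started with something like vlc -I telnet --telnet-password secret --rtsp-port 5554 --vlm-conf vlm.conf, each file should then be playable on demand from the other PC at rtsp://server:5554/movie1, rtsp://server:5554/movie2, and so on - no selection needed on the server side.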

    Read the article

  • What is the canonical approach to using a VCS right from a project's infancy?

    - by Anonymous -
    Background I've used VCS (mainly git) in the past to manage many existing projects and it works great. Typically with an existing project, I would check in each change I make to the code that either optimizes or changes the overall functionality (you know what I mean, in suitable steps, not every single line I change). Problem One thing I've not had so much practice at is creating new projects. I'm in the process of starting a new project of my own that will probably grow quite large, but I'm finding that there is a lot to do and a lot changing in the first few days/hours/weeks - the period up until the product is actually functioning in its most basic form. Is there any point in me checking in each step of the process as I would with an existing project? I'm not breaking the project with changes I make, since it isn't working yet. At the moment I've simply been using VCS as a backup at the end of each day, when I leave the computer. My first few commits were things like "Basic directory structure in place" and "DB tables created". How should I use a VCS when starting a new project?
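
    For illustration only (not from the original question; file names are hypothetical): even a pre-functional history can be broken into small, labelled commits rather than one end-of-day snapshot, which keeps the later habit intact from day one:

        git init
        git add README.md .gitignore
        git commit -m "Project skeleton and ignore rules"
        git add schema.sql
        git commit -m "Initial DB tables"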

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and principle operation of game engines and rendering engines. That being said, I want to do some experiments of rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate engine would be either a rendering engine or game engine with a modular design that might be ready to go out of the box but would be simple enough in case I need to rip out some of the guts in the rendering management and implement my own. It's a tough call because I'm right at the level where it's almost better to go from scratch, but there's no sense in having to build every single basic thing such as hierarchical transforms, etc. I just want to work with rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in my title because I figured it would likely be better supported and less likely for me to run into some obscure, less-documented problem. But OpenGL might be acceptable if the recommended framework was definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (part of adding to the density of geometry detail).

    Read the article

  • Display CPU usage separately (without root privileges)

    - by synaptik
    I need to display the CPU usage for each processing core on a single shared-memory 12-core (SMP) machine. I don't have access to install htop, else I would simply use that. I don't need fancy graphs or meters, though they would be nice. For example, simply displaying: X X X X X X X X X X X X where each X is the percentage utilization of 1 of the 12 processing cores on my machine. FYI: I know I can simply look at the utilization in "top" and divide that number by the number of cores on my machine, but I prefer a solution that shows each core separately.
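
    Not from the original question, but as a sketch of what is possible without root: the per-core counters in /proc/stat are world-readable, so a short script can print one utilization figure per core (mpstat -P ALL 1 from the sysstat package does the same, if it happens to be installed):

        #!/usr/bin/env python
        # Sample /proc/stat twice and print per-core utilization percentages.
        import time

        def snapshot():
            stats = {}
            with open("/proc/stat") as f:
                for line in f:
                    if line.startswith("cpu") and line[3].isdigit():
                        name = line.split()[0]
                        fields = [int(v) for v in line.split()[1:]]
                        # (total jiffies, idle + iowait jiffies)
                        stats[name] = (sum(fields), fields[3] + fields[4])
            return stats

        before = snapshot()
        time.sleep(1)
        after = snapshot()
        for cpu in sorted(before, key=lambda c: int(c[3:])):
            total = after[cpu][0] - before[cpu][0]
            idle = after[cpu][1] - before[cpu][1]
            busy = 100.0 * (total - idle) / total if total else 0.0
            print("%s %3.0f%%" % (cpu, busy))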

    Read the article

  • I know this is a stupid question but... How many websites can my server potentially hold?

    - by Daniel Kindler
    Sorry for the "noob" question, but... About how many medium-sized websites with average traffic could this server hold? Just like the average website, kind of like a small business site. How many sites could this server hold while still maintaining nice, decent speed?

        Chassis: PowerEdge R510, up to four 3.5" cabled hard drives, LED
        Processor: Intel® Xeon® E5630, 2.53 GHz, 12M cache, Turbo, HT, 1066 MHz max memory
        Memory: 8 GB (4 x 2 GB), 1333 MHz single-ranked UDIMMs for 1 processor, optimized
        Operating system: SUSE Linux Enterprise Server 10 SP3 (up to 32 CPU license, 1-year subscription, DIB, media); Red Hat Enterprise Linux licensing
        Hard drives: 250 GB 7.2K RPM SATA 3.5" cabled; 1 TB 7.2K RPM SATA 3.5" cabled; 2 x 2 TB 7.2K RPM SATA 3.5" cabled
        Hard drive configuration: no RAID, embedded SATA controller for x4 chassis
        Power supply: 480 W, non-redundant

    Thank you!

    Read the article

  • What are the solutions and tradeoffs for maintaining search result consistency in a web application?

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to result in inconsistent behavior (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs might exist for each option. For example, storing the IDs of the entities in the result set on the client, or storing the IDs of the entities in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more for a design comparison between the possible approaches.
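
    As a concrete sketch of one of those options - snapshotting the ordered result IDs per user so paging and next/previous stay stable (names are illustrative, and the session store could equally be a temp table or a client-side list):

        import hashlib

        PAGE_SIZE = 20

        def search_page(session, query, page, run_search, load_by_ids):
            # run_search(query) -> ordered list of entity IDs (the expensive 15 s call)
            # load_by_ids(ids)  -> full entities for one page (cheap)
            key = "results:" + hashlib.sha1(query.encode("utf-8")).hexdigest()
            ids = session.get(key)
            if ids is None:            # first request: run the search once and snapshot
                ids = run_search(query)
                session[key] = ids     # cost: one ID list per active search, per user
            start = page * PAGE_SIZE
            return load_by_ids(ids[start:start + PAGE_SIZE])

    The tradeoff surfaces directly: server memory (or storage) per active user grows with result-set size, in exchange for no re-execution and a frozen, consistent ordering.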

    Read the article

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3GB). But, I cannot seem to figure out how to do this in the Windows world.
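
    For context (not from the original question): the CLR has no -Xmx equivalent - a worker process may use whatever memory Windows grants it, and the relevant IIS settings are ceilings rather than allocations. If the application pool is being recycled at a low private-memory threshold, raising it (value in KB, 0 meaning no limit; the pool name here is a placeholder) is the closest analogue:

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.privateMemory:3145728

    On a memory-starved EC2 instance, a bigger instance type may help more than any IIS setting, since the CLR already grows the managed heap on demand.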

    Read the article

  • What kind of permission is this? (Groups+Roles)

    - by Jorge
    I'm starting to need access control for roles in my app. I don't know much about this, but I understand how vBulletin works: I create groups, then give permissions to groups. I think that what I need is Role-Based Access Control (RBAC), but I'm not sure, because I need to give permissions to groups instead of single users (maybe it's not that complicated to achieve). Example of what I'm thinking: Given a post: Editors' Group has permission to view it before it's published. Editors' Group has permission to edit its content. Public Group (default) has no permission to view it before it's published. Admin Group has permission to delete the post. So basically I want orientation on whether RBAC is what I need. And also, how would it be best to store group membership for a user? For example, would it be good to have: ID NAME PASSWORD GROUPS (1, MyName, MyPassword, 1/2/3/4/5) and explode it via PHP, or one row for every group membership in a table named permissions, for example: USERID, USERGROUP values (1, 1), (1, 2)? Maybe it should be the second way because of normal forms, but I haven't taken Databases 1 at college yet.
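
    A sketch of the one-row-per-membership layout (the normalized form), with permissions attached to groups; all table and permission names here are illustrative:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, password_hash TEXT);
        CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE user_groups (              -- one row per membership
            user_id  INTEGER REFERENCES users(id),
            group_id INTEGER REFERENCES groups(id),
            PRIMARY KEY (user_id, group_id)
        );
        CREATE TABLE group_permissions (        -- permissions belong to groups, not users
            group_id   INTEGER REFERENCES groups(id),
            permission TEXT,                    -- e.g. 'post.view_unpublished', 'post.delete'
            PRIMARY KEY (group_id, permission)
        );
        """)

        # "May user 1 edit posts?" is then a single join:
        may_edit = db.execute("""
            SELECT 1 FROM user_groups ug
            JOIN group_permissions gp ON gp.group_id = ug.group_id
            WHERE ug.user_id = ? AND gp.permission = 'post.edit'
        """, (1,)).fetchone() is not None

    Unlike the 1/2/3/4/5 string, this lets the database enforce integrity and answer permission checks with indexed joins instead of string parsing in PHP.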

    Read the article

  • Windows and domain suffix addition

    - by grawity
    I have a DNS domain and host it on my own server. My desktop PC (Windows XP) is configured to have mydomain.tld as its primary DNS suffix. Now, when the system tries to resolve any domain - stackoverflow.com, for example - it tries with the suffix added first, even if the name has periods in it. In other words, it tries stackoverflow.com.mydomain.tld. before stackoverflow.com.. Is this valid according to DNS standards and common sense? Is there anything I can do to prevent it, other than removing the suffix completely? (I still want it to be appended to single-component hostnames. Currently I have two suffixes . and mydomain.tld. configured, but it isn't very fast when resolving foohost.)

    Read the article

  • Internet connection issues after installing Windows Phone 8 SDK

    - by Mosquito
    First of all I must admit that I'm not good at all this network stuff. I am using the Windows 8 OS. On my laptop (Lenovo G570) I have installed the Windows Phone 8 SDK, and shortly after this I started having weird issues with my internet connection. When I start my laptop, the internet usually works fine, but after a few minutes it starts slowing down so much that I'm not able to open a single page. Rebooting doesn't help; after disabling and enabling the network adapter several times, it usually works again for a few minutes and then stops again. I'm sure it has something to do with the Windows Phone 8 SDK, because the problems started with it. The SDK also installed a "vEthernet (Internal Ethernet Port Windows Phone Emulator Internal Switch)" network adapter. It is worth noting that the problems occur mostly on my school network, not at home. Both at home and at school I am using a Wi-Fi connection. I hope the information given is enough to help me. Thanks in advance for any answers!

    Read the article

  • How do I completely turn off Excel 2010 autoformatting?

    - by Samuel
    I am using a lot of csv files at work with Excel 2010. These have no formatting, so Excel 2010 autoformats all the cells. I've found workarounds, but the ones I have found require action for each file or each cell (i.e. adding a single quote). My current workaround is using the "show formulas" option under Formula Auditing on the Formulas tab. This seems to show the raw data (since they are just csv files, there aren't formulas). I'd like to just keep this active so I don't have to turn it on for every file.

    Read the article

  • Using socat to exec php cli

    - by RoyHB
    There are multiple client programs that periodically connect to a port on my server and send a single line of text. When a connection to the port is made I need to start a PHP CLI script that processes the data. There may be many of the remote scripts running/connecting at more or less the same time so I think it would be best if socat forked a process for each connection to run the script. I've gotten socat to do most of what I need, using the command socat tcp-l:myport,fork exec:mypath/socatTest.php I can read the input on php://stdIn. All is good. The problem is that the process doesn't seem to fork, so if a second external program sends data while another is doing the same it gets a connection refused error. Where have I gone wrong?
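
    One variant worth trying (an assumption, not a confirmed fix): spell the address out with reuseaddr, and give EXEC the interpreter explicitly so each forked child runs the script on its own connection - paths and port are the original placeholders:

        socat TCP-LISTEN:myport,reuseaddr,fork EXEC:"php mypath/socatTest.php"

    With fork, the parent should keep listening while each child handles one client; if a second connection is still refused, checking whether the parent socat process dies after the first session would be the next diagnostic step.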

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business-level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware and operating system to compute, network and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure data center. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • After installing Windows 8, boot hangs before BIOS setup

    - by Joe Purvis
    I have an Alienware M15x, 940MX processor, 16 GB RAM, 512 GB M4 SSD, and I was running Win7 64-bit. I backed up the disk and ran the Win 8 setup. Setup appeared to go well through multiple reboots. After the last reboot, it simply stopped after POST at "Press F2 for BIOS setup". I have tried powering off, reducing RAM to one stick, and removing the CMOS battery. Now it gets to the screen with "Press F2 for setup" and "Press F12 for boot options". If I press F2, I get a single beep only. If I press F12, nothing. I cannot get into the BIOS to change boot options to boot from another disk and restore. I do have the latest BIOS. I am going to try replacing the CMOS battery, but I don't think that is likely to help. The computer had been fast and very reliable until now.

    Read the article

  • DNS record reappears after having been deleted

    - by palmbardier
    I've got a Microsoft Windows Server 2003 R2 acting as a domain controller for a small network. It provides DHCP and DNS, among a few other services. It's only got a single NIC, but it's configured with two IP addresses. I want its name to resolve to one of the two IP addresses assigned to its NIC. I've unchecked "Register this connection's addresses in DNS" under the "Advanced TCP/IP Settings". Currently we've got two distinct DNS Host (A) records for this domain controller: dc-001 - 10.0.0.1 and dc-001 - 10.0.100.1. I've deleted the first entry, but it keeps reappearing in my dnsmgmt snap-in. Unfortunately I'm not a Microsoft systems administrator by trade. Does anyone familiar with Microsoft server environments know why a deleted Host (A) record would reappear? Is there another check-box I need to toggle? Thanks in advance to all of you Microsoft experts out there.

    Read the article

  • iMac with Mountain Lion and Mavericks

    - by bob
    I have been starting up my iMac (Mavericks) with a flash drive that boots up Mountain Lion so that I can use an app that is incompatible with Mavericks. Question: I have a single external backup drive always connected to my Mac. I am thinking of partitioning that drive and giving each partition a unique name, e.g., Backup 1 and Backup 2. I intend then to boot up in Mavericks and tell Time Machine to back up to Backup 1, and then boot up in Mountain Lion and tell Time Machine to back up to Backup 2. Will this work, i.e., will the appropriate partition automatically back up the system in which it is booted?

    Read the article

  • Getting an object from a 2d array inside of a class

    - by user36324
    I have a class file that contains two classes, Platform and Platforms. Platform holds a single platform's information, and Platforms holds a 2D array of Platform objects. I'm trying to render all of them in a for loop, but it is not working. If you could kindly help me I would greatly appreciate it.

        void Platforms::setUp()
        {
            for (int x = 0; x < tilesW; x++) {
                for (int y = 0; y < tilesH; y++) {
                    Platform tempPlat(x, y, true, renderer, filename,
                                      tileSize / scaleW, tileSize / scaleH);
                    platArray[x][y] = tempPlat;   // needs Platform to be default-constructible and assignable
                }
            }
        }

        void Platforms::show()
        {
            for (int x = 0; x < tilesW; x++) {
                for (int y = 0; y < tilesH; y++) {
                    platArray[x][y].show(renderer, scaleW, scaleH);
                }
            }
        }

    Read the article

  • How to optimize the conversion of Outlook files (*.msg) to .pdf?

    - by Lilly
    The aim is to convert several messages from Microsoft Outlook (2003 and/or 2007 versions) to .pdf files. Condition: one message should generate a corresponding single pdf file. If possible, the pdf file should be named with the date format YYYY-MM-DD (e.g. 2011-02-16.pdf). The current process, limited by software such as CutePDF, requires the conversion to be performed one message at a time. I'm looking for a solution that allows the conversion of several messages at once, but under the condition mentioned above (mainly: one message = one pdf file).

    Read the article

  • Best practice - handling images on a website

    - by Steve
    I am porting an old eCommerce site to MVC 3 and would like to take advantage of design improvements. The site currently has product images stored in 3 sizes: thumbnail, medium (for display in a list) and expanded for a zoomed look. Right now we are having to upload 3 separate images that are sized exactly right, provide 3 different names that match what the site expects, etc.; it is a pain. I'd like to upload just 1 file, the large one, then let the site reduce it to the needed sizes, and I'd like the flexibility to change the thumbnail and list sizes depending on user preferences, form factor (e.g. mobile, iPad, desktop), etc., so I might need many copies of the same image. My question is: should the image be reduced then saved several times upon upload, and if so what is a good storage/naming convention? The other idea is to store just the single image but resize it programmatically before serving it to the client. Has anybody done this, and what are the tradeoffs besides a few more machine cycles? How do you pass a temporary image in memory to the client (there is no URL)?
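
    A sketch of the resize-on-upload half of that question, with a size-suffix naming convention (Pillow is used purely for illustration; the sizes, paths and product_id parameter are made up):

        import os
        from PIL import Image

        SIZES = {"thumb": (80, 80), "list": (240, 240), "zoom": (1200, 1200)}

        def save_renditions(upload_path, out_dir, product_id):
            """Write one resized copy per configured size, e.g. 123_thumb.jpg."""
            original = Image.open(upload_path)
            for name, box in SIZES.items():
                copy = original.copy()
                copy.thumbnail(box)   # shrinks in place, preserving aspect ratio
                out = os.path.join(out_dir, "%s_%s.jpg" % (product_id, name))
                copy.convert("RGB").save(out, "JPEG", quality=85)

    Adding a form factor is then a one-line change to SIZES plus a backfill job. The resize-on-request alternative serves the bytes from a controller action instead of a file URL (in MVC 3 terms, a FileResult), usually with an HTTP cache in front so each size is computed only once.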

    Read the article

  • How can I upload many files to Cloud Files on Rackspace faster?

    - by andy kim
    I have a lot of image files - about a million in a single directory - that I want to upload to Rackspace Cloud Files in the fastest and most efficient way possible. I'm uploading them with a python-cloudfiles script, but it is very slow, and I want to know about different approaches or Python script code, because uploading one file at a time over a single connection is very slow. I think uploading one tar file and uncompressing it into the directory would be a better way, but Cloud Files does not support that. Does anyone know another way?
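
    Not from the original question: since each object is a separate HTTP request, the usual speed-up is running many uploads in parallel. A sketch with a thread pool and one connection per worker (the cloudfiles calls follow the legacy python-cloudfiles API as I recall it - treat them as assumptions; credentials, container and path are placeholders):

        import os
        import threading
        from multiprocessing.pool import ThreadPool

        import cloudfiles  # legacy python-cloudfiles package (assumed API)

        SRC_DIR = "/data/images"
        tls = threading.local()

        def upload_one(name):
            if not hasattr(tls, "container"):   # one connection per worker thread
                conn = cloudfiles.get_connection("myuser", "my_api_key")
                tls.container = conn.get_container("images")
            obj = tls.container.create_object(name)
            obj.load_from_filename(os.path.join(SRC_DIR, name))
            return name

        pool = ThreadPool(20)                   # 20 concurrent uploads
        for done in pool.imap_unordered(upload_one, os.listdir(SRC_DIR)):
            print(done)

    With a million objects in one flat directory, sharding the names across several containers is also worth considering, both for listing performance and to spread the request load.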

    Read the article

  • Visual Studio 2013 now available!

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/17/visual-studio-2013-now-available.aspx
    Visual Studio 2013 is now available for download! I will attach the beginning of their web page announcement. You should note that web projects may now readily be a combination of Web Forms, MVC and Web API.
    We are excited to announce that Visual Studio 2013 is now available to you as an MSDN subscriber! For developers and development teams, Visual Studio 2013 easily delivers applications across all Microsoft devices, cloud, desktop, server and game console platforms by providing a consistent development experience, hybrid collaboration options, and state-of-the-art tools, services, and resources. Below are just a few of the highlights in this release:
    • Innovative features for greater developer productivity: Visual Studio 2013 includes many user interface improvements; there are more than 400 modified icons with greater differentiation and increased use of color, a redesigned Start page, and other design changes.
    • Support for Windows 8.1 app development: Visual Studio 2013 provides the ideal toolset for building modern applications that leverage the next wave in Windows platform innovation (Windows 8.1), while supporting devices and services across all Microsoft platforms. Support for Windows Store app development in Windows 8.1 includes updates to the tools, controls and templates, new Coded UI test support for XAML apps, UI Responsiveness Analyzer and Energy Consumption profiler for XAML & HTML apps, enhanced memory profiling tools for HTML apps, and improved integration with the Windows Store.
    • Web development advances: Creating websites or services on the Microsoft platform provides you with many options, including ASP.NET WebForms, ASP.NET MVC, WCF or Web API services, and more. Previously, working with each of these approaches meant working with separate project types and tooling isolated to that project's capabilities. The One ASP.NET vision unifies your web project experience in Visual Studio 2013 so that you can create ASP.NET web applications using your preference of ASP.NET component frameworks in a single project. Now you can mix and match the right tools for the job within your web projects, giving you increased flexibility and productivity.

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options and game screen. A single game session usually lasts for 1 minute, so the user will alternate frequently between the main menu and game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with their own content manager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache that one, since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since Main, Options and Game Screen share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires checking whether it's already in the common factory, and if not, modifying the existing factories, and so on. And this is just considering textures; I haven't added sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?

    Read the article

  • Bringing the xenbr0 interface up on XEN under Ubuntu 8.04

    - by iyl
    I installed XEN on Ubuntu 8.04 using this tutorial: http://www.howtoforge.com/ubuntu-8.04-server-install-xen-from-ubuntu-repositories but after I reboot with the XEN kernel, I don't have a xenbr0 device. I see that the network-bridge script runs and creates a peth0 device, but not xenbr0. I have a very basic IP setup, with a single static IP defined in /etc/network/interfaces. The only unusual thing is that my hosting provider (1&1) gave me a netmask of 255.255.255.255, so I had to add the default gateway with this script:

        /sbin/route add -host 10.255.255.1 dev eth0
        /sbin/route add default gw 10.255.255.1

    Everything else is plain vanilla Ubuntu 8.04.

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!

    Read the article
