Search Results

Search found 20846 results on 834 pages for 'current dir'.


  • How to see what's taking up space on your hard drive

    - by sam
    I'm thinking of switching to one of the MacBook Pro Retinas with a 256GB SSD. I'm making the move from a 512GB hard drive, which at the moment is split into a 60GB Windows partition (which I don't think I'm going to have on my new machine) and a 440GB main partition; of the main partition I've got 140GB free. So, all in, if I disregard the 60GB Windows partition I've got about 200GB free, meaning I'm using 300GB, which is still a bit too much. I've had this machine 4-5 years, so it's likely to be clogged with files that I no longer need. Is there a tool I can use to view my current hard drive and see what's taking up the most room, so I can begin to see where I can cut down? Just for a bit of background, my current machine runs OS X.

    Read the article

  • How can I improve my isometric tile-picking algorithm?

    - by Cypher
    I've spent the last few days researching isometric tile-picking algorithms (converting screen coordinates to tile coordinates), and have obviously found a lot of the math beyond my grasp. I have come fairly close, and what I have is workable, but I would like to improve on this algorithm, as it's a little off and seems to pick down and to the right of the mouse pointer. I've uploaded a video to help visualize the current implementation: http://youtu.be/EqwWcq1zuaM My isometric rendering algorithm is based on what is found in this Stack Overflow question's answer, with the exception that my x and y axes are inverted (x increases down-right, while y increases up-right). Here is where I am converting from screen to tiles:

        // these next few lines convert the mouse pointer position from screen
        // coordinates to tile-grid coordinates. cameraOffset captures the current
        // mouse location and takes into consideration the camera's position on screen.
        System.Drawing.Point cameraOffset = new System.Drawing.Point( 0, 0 );
        cameraOffset.X = mouseLocation.X + (int)camera.Left;
        cameraOffset.Y = ( mouseLocation.Y + (int)camera.Top );

        // the camera-aware mouse coordinates are then further converted in an attempt
        // to select only the "tile" portion of the grid tiles, instead of the entire
        // rectangle. this algorithm gets close, but could use improvement.
        mouseTileLocation.X = ( cameraOffset.X + 2 * cameraOffset.Y ) / Global.TileWidth;
        mouseTileLocation.Y = -( ( 2 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );

    Things to make note of: mouseLocation is a System.Drawing.Point that represents the screen coordinates of the mouse pointer. cameraOffset is the screen position of the mouse pointer that includes the position of the game camera. mouseTileLocation is a System.Drawing.Point that is supposed to represent the tile coordinates of the mouse pointer. If you check out the above link to YouTube, you'll notice that the picking algorithm is off a bit. How can I improve on this?
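
    One hedged suggestion, assuming the same axis convention and that Global.TileWidth is the full diamond width: do the division in floating point and floor the result explicitly. This addresses only the rounding, but C#'s integer division truncates toward zero rather than flooring, so the two conversions above behave inconsistently around the origin and can bias the pick in one direction:

        // a minimal sketch, not a drop-in fix: the same conversion as above,
        // but carried out in doubles and floored, since integer division
        // truncates toward zero and only matches Math.Floor for non-negative
        // values. Any remaining constant bias can then be corrected by
        // shifting cameraOffset by a fixed amount before dividing.
        double tileX = ( cameraOffset.X + 2.0 * cameraOffset.Y ) / Global.TileWidth;
        double tileY = -( ( 2.0 * cameraOffset.Y - cameraOffset.X ) / Global.TileWidth );
        mouseTileLocation.X = (int)Math.Floor( tileX );
        mouseTileLocation.Y = (int)Math.Floor( tileY );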

    Read the article

  • Per-vertex animation with VBOs: Stream each frame or use index offset per frame?

    - by charstar
    Scenario: Meshes are animated using either skeletons (skinned animation) or some form of morph targets (i.e. per-vertex key frames). However, in either case, the animations are known in full at load time; that is, there is no physics, IK solving, or any other form of in-game pose solving. The number of character actions (animations) will be limited but rich (hand-animated). There may be multiple characters using each mesh and its animations simultaneously in-game (they will be at different poses/keyframes at the same time). Assume color and texture coordinate buffers are static.

    Goal: To leverage the richness of well-vetted animation tools such as Blender to do the heavy lifting for a small but rich set of animations. I am aware of additive pose blending like that from Naughty Dog and similar techniques, but I would prefer to expend a little RAM/VRAM to avoid implementing a thesis-ready pose solver. I would also like to avoid implementing a keyframe + interpolation curve solver (reinventing Blender vertex groups and IPOs).

    Current considerations:

    1. Much like a non-shader-powered pose solver, create a VBO for each character and copy vertex and normal data to each VBO on each frame (VBO in STREAMING).
    2. Create one VBO for each animation, where each frame (interleaved vertex and normal data) is concatenated onto the VBO. Then each character simply has a buffer pointer offset based on its current animation frame (e.g. pointer offset = (numVertices + numNormals) * frameNumber). (VBO in STATIC)

    Known trade-offs: In 1 above, each VBO would be small, but there would be many VBOs and therefore lots of buffer binding and vertex copying each frame; both client- and pipeline-intensive. In 2 above, there would be few VBOs, and therefore insignificant buffer binding and no vertex data getting jammed down the pipe each frame, but each VBO would be quite large. Are there any pitfalls to number 2 (aside from finite memory)? Are there other methods that I am missing?
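
    For reference, the offset arithmetic behind option 2 is small enough to sketch. A hedged example, assuming tightly packed interleaved float3 positions and float3 normals (the names follow the question's pointer-offset formula and are otherwise hypothetical):

        // a minimal sketch of the option-2 offset math; names are hypothetical.
        // Each keyframe occupies one contiguous, fixed-size slice of a single
        // static VBO, so a character's draw offset depends only on its frame.
        const int floatsPerVertex = 3 + 3;                         // position + normal
        int frameSizeBytes = numVertices * floatsPerVertex * sizeof(float);
        int byteOffset = currentFrame * frameSizeBytes;            // per character
        // byteOffset is then handed to the vertex-pointer/attribute setup as
        // the buffer offset before issuing that character's draw call.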

    Read the article

  • newbie hard drive upgrade question

    - by musoNic80
    I have an Acer Aspire 3500WLMi laptop. It currently has a 40GB hard drive which I would like to upgrade. Could someone talk me through the process? I've listed my concerns/queries: Can I buy and install any 2.5" SATA or IDE hard drive into this machine? Should I buy some sort of USB caddy and clone my existing drive onto the new one via USB, then physically swap the drives over? My current disk is partitioned to include a small amount of space for an Ubuntu install. Will a clone keep the current partition sizes, or is it best for me to repartition once I've cloned? Many thanks.

    Read the article

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they have both been encouraging is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for reuse? Thanks for any and all help!

    Read the article

  • Best Guest OS for running freeswitch under Proxmox

    - by Frank Waller
    We have a lot of Asterisk systems running on dedicated machines and would like to use FreeSWITCH to replace a number of them. One of the advantages of FreeSWITCH is supposed to be that it does much better in a virtualized environment than Asterisk does. However, I can find very little information about people using it in Proxmox containers. I would like to know if anyone has seen any ready-to-run Proxmox images that include FreeSWITCH, so we can test a number of things without having to go deeply into creating our own. Or at least a clue as to which system images/distro images we can use to get it installed quickly, without having to deal with too many dependency or Linux version issues. Just to be clear: it should be for a more or less current Proxmox and current FreeSWITCH. I am not looking to use the KVM mode, but would consider it if it is otherwise ready to run out of the box. I would rather use a real OpenVZ-based container. Thanks for anyone helping!

    Read the article

  • Date header returned by IIS7 is wrong

    - by James Hollingworth
    I am serving an ASP.NET application from IIS 7, but we are experiencing some weird cookie issues. The code works fine in other environments, so we are assuming this is specific to this server (related question). We have been looking at the HTTP headers returned, and someone pointed out that the Date HTTP header shows the 1st of Jan rather than today's date (so far it always shows that date regardless of what the current date is). The system clock is set correctly (and we can print out the current time/date via DateTime.Now correctly as well), so we can't work out why it's not working. Does anyone have any ideas? Is this a red herring? Thanks, James
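
    For anyone reproducing this, a hedged diagnostic sketch (the URL is a placeholder) that prints the server's Date header next to the local clock, both in RFC 1123 form:

        // a small diagnostic sketch: request a page and print the HTTP Date
        // header alongside the local clock; the URL is a placeholder.
        using System;
        using System.Net;

        class DateHeaderCheck
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://yourserver/");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("Date header: " + response.Headers["Date"]);
                    Console.WriteLine("Local (UTC): " + DateTime.UtcNow.ToString("r"));
                }
            }
        }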

    Read the article

  • TODO Formatting

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott

    TODOs should only be used for a short period of time, to remind you that something needs to be done. They should then be addressed as soon as possible. In order to know who owns a TODO task and how long it's been outstanding, my company uses the following standard for TODO formatting:

    Format: // TODO: Owner Initials – Date Created – Description of task.
    Sample: // TODO: CM – 2012/01/20 – Move this class to a new location so it can be reused.

    Using this pattern makes it easy to use the ReSharper TODO explorer.

    The Carrot. In order to make it easy for developers to apply this rule, a code snippet can be created in Visual Studio. Even better, I created a ReSharper template. This gives the facility to use the current-user-name and current-date macros, which makes the formatting look like this:

    Sample: // TODO: cmott – 2012/01/20 – Move this class to a new location so it can be reused.

    The Stick. How do you enforce such a rule? I tried to create a custom ReSharper Highlighting Pattern to perform custom code analysis inspection for deviations from this pattern. However, I did not have any success: the find dialog would not accept // text. If I work it out, I will update this blog post.

    StyleCop. Instead, I created a custom StyleCop rule. I followed the approach used by the StyleCop Contrib project. This provides a simple-to-use base class and an easy-to-use unit-testing framework. I will upload this TODO format analyzer as a patch to that project.
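
    As a hedged illustration of the check itself (a hypothetical helper, not the actual StyleCop Contrib rule), the format above maps naturally onto a single regular expression:

        // a minimal sketch: validate the TODO comment format with a regex.
        // This is a hypothetical helper, not the actual StyleCop rule.
        using System.Text.RegularExpressions;

        static class TodoFormat
        {
            // matches e.g. "// TODO: CM – 2012/01/20 – Move this class ..."
            // (accepts either a hyphen or an en dash as the separator)
            static readonly Regex Pattern = new Regex(
                @"^//\s*TODO\s*:\s*\w+\s*[-–]\s*\d{4}/\d{2}/\d{2}\s*[-–]\s*.+");

            public static bool IsValid(string commentLine)
            {
                return Pattern.IsMatch(commentLine.Trim());
            }
        }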

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, have for a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API, such that they would be able to switch to it while still being able to accomplish their current goals. Let me give you an example of some informal requirements of Project A: When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user. It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine. The API must support the assignment of public IPs to a virtual machine and the retrieval of those public IPs. ... In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs, to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and is thus not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper"... Any advice/learning resources for me?

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may for a number of reasons not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

    1. Start in the directory of the including file.
    2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
    3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise, move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
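
    For concreteness, a hedged sketch of those three steps in .NET terms (purely illustrative; it re-scans subtrees it has already visited and omits error handling for unreadable directories):

        // a minimal sketch of the walk-up-and-search algorithm above.
        using System.IO;

        static class HeaderLocator
        {
            public static string Find(string includingFileDir, string headerName)
            {
                var dir = new DirectoryInfo(includingFileDir);   // step 1
                while (dir != null)
                {
                    // step 2: the current directory or any subdirectory thereof
                    FileInfo[] matches = dir.GetFiles(headerName, SearchOption.AllDirectories);
                    if (matches.Length > 0)
                        return matches[0].FullName;

                    // step 3: at the root, Parent is null and the loop ends;
                    // otherwise move up one level and search again.
                    dir = dir.Parent;
                }
                return null;   // not present on this machine, so skip it
            }
        }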

    Read the article

  • Obscure SPUtility.SendMail Behavior When Manually Passing in Mail Headers

    - by Damon
    There are two ways to send mail in SharePoint: you can either use the mail components from the System.Net namespace, or you can send email using SharePoint's SPUtility.SendMail method. One of the benefits of the SPUtility.SendMail method is that it uses the mail configuration from SharePoint, so you can manage settings in Central Administration instead of having to go through and modify your web.config file. SPUtility.SendMail can get the job done, but it's definitely not as developer-friendly as the components from the System.Net namespace. If you want to CC someone on an email, for example, you do NOT have a nice CC parameter; you have to manually add the CC mail header and pass it into the SPUtility.SendMail method. I had to do this the other day, and ran into a really obscure issue. If you do NOT pass the headers into the method, then SharePoint sends the email using the From address configured in the Outgoing Mail settings in Central Admin. If you pass headers into the method, but do not include the from header, then SharePoint sends the mail using the email address of the current user. This can be an issue if your mail server is set up to reject an email from an invalid email address or an email address that is not on your domain. The way to fix this issue is to always pass in the from header. If you want to use the configured From address, then you can do the following:

        SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
        StringDictionary headers = new StringDictionary();
        headers.Add("from", webApp.OutboundMailSenderAddress);
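
    To round that snippet out, a hedged completion: the recipient, subject, and body values below are illustrative, and note that the overload the SharePoint API actually exposes for header-based sending is SPUtility.SendEmail(SPWeb, StringDictionary, string):

        // a hedged completion of the snippet above; the recipient, subject,
        // and body are illustrative values, not from the original post.
        headers.Add("to", "someone@example.com");
        headers.Add("subject", "Nightly report");
        SPUtility.SendEmail(SPContext.Current.Web, headers, "Message body goes here.");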

    Read the article

  • download/export emails from webmail

    - by misterjinx
    Hello, I'm switching hosts and I want to move the emails from my accounts too. In order to have my current emails on the new host, I want to download/export them and then import them at the other host. To do this, I use one of the webmail clients (SquirrelMail, Roundcube, Horde) available on my current host. The problem is that, except for Roundcube, I don't see any download/export option available, and in Roundcube I can only select one email at a time and download it as .eml. My question is: how do I export/download all the emails from one account and import them at the new host? I know this is possible because I remember doing it some time ago using SquirrelMail, but I can't find anything related to this now. Thank you.

    Read the article

  • ArchBeat Link-o-Rama for November 13, 2012

    - by Bob Rhubart
    This week on the OTN Solution Architect Homepage
    Make time to check out this week's features on the OTN Solution Architect Homepage, including:

    • SOA Practitioner Guide: Identifying and Discovering Services
    • Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    • OTN ArchBeat Podcast: Are You Future Proof (Conclusion)
    • Keynote: New Paradigms for Application Architecture: From Applications to IT Services

    In this keynote address from the SOA, Cloud, and Service Technology Symposium, Anne Thomas Manes highlights the importance of adapting to the current trend marked by the convergence of mobile, social and cloud, moving away from app-centric design to service-based solutions.

    New Solaris Cluster! | Jeff Victor
    "Oracle Solaris Cluster 4.1 offers both High Availability (HA) and also Scalable Services capabilities," explains Jeff Victor. "HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support is available for both x86 and SPARC systems." You'll find download links and other resources in Jeff's short post.

    ADF BC View Accessor To Centralize Business Logic Processing | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis illustrates one way to implement a use case that requires a comparison between the current row status and the data returned by another query (no master-detail relationship).

    Thought for the Day
    "The danger from computers is not that they will eventually get as smart as men, but that we will meanwhile agree to meet them halfway." — Bernard Avishai
    Source: SoftwareQuotes.com

    Read the article

  • MySQL keeps crashing OS server. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). My server crashes on a regular basis (almost once a day) with this error:

        Changed limits: max_open_files: 2048  max_connections: 800  table_cache: 619

    I did not use the heavy InnoDB .ini file, although I am rethinking that I should have? I am worried that big configuration changes will make my current sites stop working. What should I do? Here are my current ini settings:

        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10

    Read the article

  • Is there a cross-platform special directory I can use for game save files?

    - by Suds
    I'm developing with LWJGL and Java on a Windows 7 laptop. I've successfully set up saving to the %appdata%\gamename\saves\ folder (long form: c:\users\user\appdata\roaming\gamename\saves\) by using File dir = new File(System.getenv("APPDATA") + "\\gamename\\saves\\");. I have hobbyist-level experience with Linux and zero experience with OS X. My game will be fully cross-platform. Is System.getenv("APPDATA") cross-platform? If so, where does it point to on Linux or OS X? Is there a best-practice alternative that I should use?

    Read the article

  • How To Change Attachment Size in WorkItems in TFS 2010

    - by Ravi
    Recently, I came across an issue where I had to change the size limit for work item attachments in TFS 2010. I searched all around the internet, only to find very little information about it, and honestly, what I found wasn't clear. So, after breaking my head for some time, I was successful in doing it. Here are my conclusions and the procedure:

    1. You DON'T have to change it programmatically. You can do it directly from the IIS web services.
    2. You CAN change it programmatically too, by making an entry in the TFS registry using a small piece of code.

    Let me show you how it is done from IIS. This changes the attachment size limit for work items in TFS 2010 to your required value, for each collection individually:

    1. You must be a TFS admin to do this (log in with the setup account).
    2. Browse to http://localhost:8080/tfs/<YOUR-COLLECTION-NAME>/WorkItemTracking/v1.0/ConfigurationSettingsService.asmx
    3. You'll see 3 asmx services – GetMaxAttachmentSize, GetWorkItemTrackingVersion and SetMaxAttachmentSize.
    4. To know the current maximum attachment size for a collection, click on the first service and you'll see the existing value for this particular collection when you click the 'Invoke' button (the value is in bytes).
    5. Now click on the 'SetMaxAttachmentSize' web service and fill in the value of your choice.
    6. Reset IIS (not required, honestly, but I did it just to be sure).
    7. Now try attaching a file greater than the size you've set. It'll fail successfully. Below is the error you'd see in such scenarios.

    Let me know if you see any issues and I'll be happy to help!
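
    For the programmatic route (conclusion 2), a hedged sketch using the TFS client object model; the registry path below is an assumption, so verify it against your environment before relying on it:

        // a hedged sketch of setting the limit through the TFS registry service;
        // the registry path and the 4 MB value are assumptions, not confirmed.
        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;

        class SetAttachmentSize
        {
            static void Main()
            {
                var tfs = new TfsTeamProjectCollection(
                    new Uri("http://localhost:8080/tfs/YourCollection"));
                var registry = tfs.GetService<ITeamFoundationRegistry>();

                // 4 MB, expressed in bytes (assumed path)
                registry.SetValue("/Service/WorkItemTracking/Settings/MaxAttachmentSize",
                                  "4194304");
            }
        }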

    Read the article

  • Inconsistency between date-time in terminal and clock

    - by Franck
    I can't figure out why I have a 5-hour difference between the GUI clock and the date command in the terminal. My BIOS clock is set to GMT... Any ideas?

        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:48:47 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now: Wed Apr 11 09:49:02 CEST 2012.
        Universal Time is now: Wed Apr 11 07:49:02 UTC 2012.
        franck@franck-ThinkPad-T61:~$ tail /etc/timezone
        Europe/Paris
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:21 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo dpkg-reconfigure --frontend noninteractive tzdata
        Current default time zone: 'Europe/Paris'
        Local time is now: Wed Apr 11 09:49:27 CEST 2012.
        Universal Time is now: Wed Apr 11 07:49:27 UTC 2012.
        franck@franck-ThinkPad-T61:~$ date
        mercredi 11 avril 2012, 02:49:30 (UTC-0500)
        franck@franck-ThinkPad-T61:~$ sudo cat /etc/default/rcS
        #
        # /etc/default/rcS
        #
        # Default settings for the scripts in /etc/rcS.d/
        #
        # For information about these variables see the rcS(5) manual page.
        #
        # This file belongs to the "initscripts" package.
        TMPTIME=0
        SULOGIN=no
        DELAYLOGIN=no
        UTC=yes
        VERBOSE=no
        FSCKFIX=no
        franck@franck-ThinkPad-T61:~$ sudo hwclock --show
        mer. 11 avril 2012 07:49:48 CDT -0.555705 secondes

    Read the article

  • Importing tab delimited file into array in Visual Basic 2013 [migrated]

    - by JaceG
    I need to import a tab-delimited text file that has 11 columns and an unknown number of rows (always a minimum of 3 rows). I would like to import this text file as an array and be able to call data from it as needed throughout my project. And then, to make things more difficult, I need to replace items in the array, and even add more rows to it as the project goes on (all at runtime). Hopefully someone can suggest code corrections or useful methods. I'm hoping to use something like the array style sMyStrings(3,2), which I believe would be the easiest way to control my data. Any help is gladly appreciated, and worthy of a slab of beer. Here's the coding I have so far:

        Imports System.IO
        Imports Microsoft.VisualBasic.FileIO

        Public Class Main
            Dim strReadLine As String

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Dim sReader As IO.StreamReader = Nothing
                Dim sRawString As String = Nothing
                Dim sMyStrings() As String = Nothing
                Dim intCount As Integer = -1
                Dim intFullLoop As Integer = 0

                If IO.File.Exists("C:\MyProject\Hardware.txt") Then ' Make sure the file exists
                    sReader = New IO.StreamReader("C:\MyProject\Hardware.txt")
                Else
                    MsgBox("File doesn't exist.", MsgBoxStyle.Critical, "Error")
                    End
                End If

                Do While sReader.Peek >= 0 ' Make sure you can read beyond the current position
                    sRawString = sReader.ReadLine() ' Read the current line
                    sMyStrings = sRawString.Split(New Char() {Chr(9)}) ' Separate values and store in a string array
                    For Each s As String In sMyStrings ' Loop through the string array
                        intCount = intCount + 1 ' Increment
                        If TextBox1.Text <> "" Then TextBox1.Text = TextBox1.Text & vbCrLf ' Add line feed
                        TextBox1.Text = TextBox1.Text & s ' Add line to debug textbox
                        If intFullLoop > 14 And intCount > -1 And CBool((intCount - 0) / 11 Mod 0) Then
                            cmbSelectHinge.Items.Add(sMyStrings(intCount))
                        End If
                    Next
                    intCount = -1
                    intFullLoop = intFullLoop + 1
                Loop
            End Sub
        End Class

    Read the article

  • windows server secondary plex or windows server default

    - by shiva
    I am new to handling issues on a server. When my system tries to reboot, I see two boot entries to choose from: Windows Server 2012 (the current OS) and Windows Server secondary plex. So whenever there is a system restart, it stops at this screen, and since I connect to this server using RDP, I have to wait for the Hetzner console to click one of the OS entries to boot. Even though the current OS is set as default and the timeout is set to 30 seconds, it still waits for user input. So I want to know which of the two I should use to boot; I just want one OS.

    Read the article

  • CUDA & MSI GT60 with Optimus enabled GTX670M?

    - by user1076693
    I have an MSI GT60 laptop with an Optimus-enabled GTX 670M GPU, and I have been trying to get CUDA going in an Ubuntu 12.04 environment. I realize that Optimus is not supported on Linux, but I have read the following post suggesting that CUDA works for hybrid GPUs: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? I installed the NVIDIA driver via:

        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current

    The resulting driver version is 302.17, and supposedly the GTX 670M has been supported since 295.59. I also downloaded CUDA 4.2 from the NVIDIA site and compiled it against the nvidia-current libraries. Unfortunately, when I run deviceQuery in the CUDA SDK, I get the following output:

        cudaGetDeviceCount returned 38 -> no CUDA-capable device is detected

    Checking /proc/driver/nvidia/gpus/0/information gives the following:

        Model: GeForce GTX 670M
        IRQ: 16
        GPU UUID: GPU-????????-????-????-????-????????????
        Video BIOS: ??.??.??.??.??
        Bus Type: PCI-E
        DMA Size: 32 bits
        DMA Mask: 0xffffffffff
        Bus Location: 0000:01.00.0

    Here is the output of "lspci | grep VGA":

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation Device 1213 (rev ff)

    So... what am I doing wrong? Thanks!

    Read the article

  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite for healthcare integration came up with a new way of monitoring, where the user can configure a dashboard and follow dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals. A dashboard shows the following information:

    • Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
    • Messages Sent: The number of messages sent by the endpoint in the specified time period.
    • Messages Received: The number of messages received by the endpoint in the specified time period.
    • Errors: The number of messages with errors for the endpoint in the given time period.
    • Last Sent: The date and time the last message was sent from the endpoint.
    • Last Received: The date and time the last message was received from the endpoint.
    • Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:

    • The document type.
    • The number of messages received per second.
    • The total number of messages processed in the specified time period.
    • The average size of each message.

    Read the article

  • How to deal with OpenGL and Fullscreen on OS X

    - by Armin Ronacher
    I do most of my development on OS X, and for my current game project this is my target environment. However, when I play games, I play on Windows. As a Windows gamer I am used to Alt+Tab switching from within the game to the last application that was open. On OS X I can currently find neither a game that supports that nor a way to make it possible. My current project is based on SDL 1.3, and I can see that Cmd+Tab is a sequence that is sent directly to my application and not intercepted by the operating system. My first attempt was to hide the rendering window on Cmd+Tab, which certainly works, but has the disadvantage that a hidden OpenGL window in SDL cannot be restored when the user tabs back to the application. First of all, there is no event fired for that (or I can't find it); secondly, the core problem is that when that application window is hidden, my game is still the active application, just with its window gone. That is incredibly annoying. Any ideas how to approximate the Windows/Linux behavior for Alt+Tab?

    Read the article

  • url mod_rewrite

    - by Pritam Borkar
    I had an e-commerce website hosted at http://mydomain.com/beta for more than a year; eventually I decided to move the website to the root, http://mydomain.com. I had posted quite a lot of links to forums etc. when my site used to be hosted in the sub-dir /beta. Is there any way to use mod_rewrite so that all the old links I have posted do not return as broken links, now that the site is no longer hosted in /beta and is hosted at the site root? I did read that mod_rewrite can help resolve this issue, but I also read that this has to be done with care. Just a tip: this site is using friendly URLs.

    Read the article

  • MySQL InnoDB corruption after server crash

    - by Ward Loockx
    Yesterday my server died because of an outage in the data center. Today it's back up, but I'm having some problems with MySQL. First of all, my MySQL server was not able to start. For this reason I deleted the files ib_logfile0 and ib_logfile1 in the /var/lib/mysql folder (I still have the old failing files). After this, my server was able to start up again. But now I see a lot of issues in the MySQL log file:

        Sep 1 09:43:55 * mysqld: 120901 9:43:55 InnoDB: Error: page 70944 log sequence number 8 1483471899
        Sep 1 09:43:55 * mysqld: InnoDB: is in the future! Current system log sequence number 5 612394935.
        Sep 1 09:43:55 * mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep 1 09:43:55 * mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep 1 09:43:55 * mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html

    When I check the docs on mysql.com, I find that I need to recover my database from backups. I have a backup, but I'm not sure of the right way to import it. Or is there a way to recover without having to re-import the database? So, if I'm correct, I need to set innodb_force_recovery to 4 in MySQL, delete all current data, and re-import? Is there a way to do this without downtime? I also have one slave running. This slave now has the following status:

        Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

    How can I totally reset the slave after the new import on the master has happened? Hopefully we can find a solution without too much downtime. Thanks!

    Read the article

  • Help building pygtk application

    - by umpirsky
    This is my application. It is created with Quickly. I would like to package it for Ubuntu now. I tried to package it with Quickly, but that failed. At first I was trying to install it using setup.py, but it only gets copied into the Python lib dir: no icon, no .desktop file installed. Then I tried to follow this guide, but I ended up with a package without an icon that was of poor quality, and, most important of all, the approach does not use setup.py and was pretty complicated. I would like to automate the packaging process. Can someone point me in the right direction, with some examples of existing apps that have automated packaging, etc.? Thanks in advance.

    Read the article
