Search Results

Search found 1567 results on 63 pages for 'b gen jack o neill'.

Page 12/63

  • Should I use two queries, or is there a way to JOIN this in MySQL/PHP?

    - by Jack W-H
    Morning y'all! Basically, I'm using a table to store my main data - called 'Code' - a table called 'Tags' to store the tags for each code entry, and a table called 'code_tags' to intersect them. There's also a table called 'users', which stores information about the users who submitted each bit of code. On my homepage, I want 5 results returned from the database. Each returned result needs to list the code's title and summary, and then fetch the author's first name based on the ID of the person who submitted it. I've managed to achieve this much so far (woot!). My problem arises when I try to collect all the tags as well. At the moment this is a pretty big query and it's scaring me a little. Here's my problematic query:

        SELECT code.*, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid
        FROM code, code_tags, tags, users
        WHERE users.id = code.author
          AND code_tags.code_id = code.id
          AND tags.id = code_tags.tag_id
        ORDER BY date DESC LIMIT 0, 5

    What it returns is correct-looking data, but with several repeated rows for each tag. So, for example, if a Code entry has 3 tags, it will return an identical row 3 times - except that in each of the three returned rows, the tag changes. Does that make sense? How would I go about changing this? Thanks! Jack
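
    A common way to collapse those duplicated rows is MySQL's GROUP_CONCAT, which folds each entry's tags into a single column so the database returns one row per code entry. The sketch below is a suggestion, not the poster's own solution, and the tag column name (tags.name) is an assumption, since the post never names it:

        -- One row per code entry; tags folded into a comma-separated list.
        SELECT code.*, users.firstname AS authorname, users.id AS authorid,
               GROUP_CONCAT(tags.name SEPARATOR ', ') AS taglist
        FROM code
        JOIN users     ON users.id = code.author
        JOIN code_tags ON code_tags.code_id = code.id
        JOIN tags      ON tags.id = code_tags.tag_id
        GROUP BY code.id
        ORDER BY date DESC
        LIMIT 0, 5;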

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi, for a long time I have been thinking about and studying the output of the C compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is some reason I don't see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong - I know the genius minds of the people who put PCs together had a reason to do so. What exactly, you ask? I'll tell you right away, using C as an example:

    1: Stack local-scope memory allocation: Typical local memory allocation uses the stack - just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM at default stack addresses, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before it reaches an actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them just by mov 0x00000000, #10? You won't actually touch memory blocks 0x00000000 and 0x00000004, but the ones your OS set the paging tables to. Actually, since memory access via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity: When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack as main starts. So there is an actual instruction to do so, and the actual number embedded in memory. That means there are 2 copies of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes on the stack. But why? Why not, when declaring a variable, just decide at compile time which memory block it will occupy, and then, whenever it is used, just refer to that memory location?
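
    For reference, the code generation pattern described in point 1 looks roughly like this (a sketch; the exact assembly varies by compiler and optimization level). The trailing comment also notes the standard reason fixed addresses don't work:

        /* sum.c - two locals allocated in the function's stack frame */
        int sum_example(void) {
            int a = 10;
            int b = 32;
            return a + b;
        }

        /* Typical unoptimized 32-bit x86 output (approximate):
         *     push ebp
         *     mov  ebp, esp
         *     sub  esp, 8                  ; reserve space for a and b
         *     mov  DWORD PTR [ebp-4], 10
         *     mov  DWORD PTR [ebp-8], 32
         *     ...
         * The ebp-relative addressing is what lets the locals exist once
         * per call: a fixed address like 0x00000000 would break as soon as
         * the function were called recursively, or from two threads at
         * once, since every activation would overwrite the same bytes. */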

    Read the article

  • Combinationally unique MySQL tables

    - by Jack Webb-Heller
    So, here's the problem (it's probably an easy one :P). This is my table structure:

        CREATE TABLE `users_awards` (
          `user_id` int(11) NOT NULL,
          `award_id` int(11) NOT NULL,
          `duplicate` int(11) NOT NULL DEFAULT '0',
          UNIQUE KEY `award_id` (`award_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    So it's for a user awards system. I don't want my users to be granted the same award multiple times, which is why I have a 'duplicate' field. The query I'm trying is this (with sample data of 3 and 2):

        INSERT INTO users_awards (user_id, award_id) VALUES ('3','2')
        ON DUPLICATE KEY UPDATE duplicate=duplicate+1

    My MySQL is a little rusty, but I set user_id to be a primary key and award_id to be a UNIQUE key. This (kind of) created the desired effect. When user 1 was given award 2, it was inserted. If he/she got it twice, only one row would be in the table, and duplicate would be set to 1. And again, 2, etc. When user 2 was given award 1, it was inserted; if he/she got it twice, duplicate updated, and so on. But when user 1 is given award 1 (after user 2 has already been awarded it), the duplicate field of user 2's award-1 row increases and nothing is added for user 1. Sorry if that's a little n00bish. Really appreciate the help! Jack
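
    The described behaviour follows directly from the single-column UNIQUE KEY on award_id: any two rows with the same award collide, no matter which user they belong to. A sketch of the usual fix (my suggestion, not something from the original post) is to make the uniqueness span both columns, so the ON DUPLICATE KEY branch only fires when the same user receives the same award:

        -- Replace the per-column key with a composite unique key.
        ALTER TABLE users_awards
          DROP INDEX award_id,
          ADD UNIQUE KEY user_award (user_id, award_id);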

    Read the article

  • Implementation of APIs on different platforms

    - by b-gen-jack-o-neill
    OK, this is basically about any non-default OS API running on a different OS. But for my example let's consider the platform Windows and the API SDL (Simple DirectMedia Layer). This question actually came to my mind when I was reading about SDL. Originally, I thought that on Windows (and basically any other OS) you must use the OS API to perform certain actions, like writing to the screen, creating a window, and so on, because that API knows which kernel calls and system subroutine calls it has to make. But when I read about SDL, it surprised me, because you cannot make the computer do anything more than the OS can: you cannot access the HW directly, only through the OS API, from console allocation to DirectX. So, my question actually is: how do these non-default OS APIs work? Do they use (wrap) the original system API, like MFC wraps the Win32 API? Do they actually have direct access to the Windows kernel? Or is there some third way in between? Thanks.
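
    The wrapping pattern the question asks about can be reduced to a toy example (hypothetical names below; SDL's real internals are far more involved). A cross-platform library typically exposes one portable function and selects a native backend at compile time, so on Windows the "portable" call bottoms out in Win32:

        /* toy_msgbox.c - a hypothetical portable API wrapping the native one */
        #ifdef _WIN32
        #include <windows.h>
        void toy_show_message(const char *text) {
            /* Windows backend: delegate to the Win32 API. */
            MessageBoxA(NULL, text, "Message", MB_OK);
        }
        #else
        #include <stdio.h>
        void toy_show_message(const char *text) {
            /* Fallback backend for other platforms. */
            printf("[Message] %s\n", text);
        }
        #endif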

    Read the article

  • C/C++ usage of special CPU features

    - by b-gen-jack-o-neill
    Hi, I am curious: do new compilers use the extra features built into new CPUs, such as MMX, SSE, 3DNow!, and so on? I mean, the original 8086 didn't even have an FPU, so a compiler that old could not use one, but new compilers can, since the FPU is part of every new CPU. So, do new compilers use the new features of the CPU? Or, to ask it more precisely, do the new C/C++ standard library functions use the new features? Thanks for the answer.
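
    Besides any automatic vectorization a compiler performs, these instruction sets are exposed to C/C++ directly through intrinsics. A minimal sketch (assumes an SSE-capable x86 CPU and a compiler that ships the standard intrinsics header):

        #include <xmmintrin.h>  /* SSE intrinsics */

        /* Add four floats at once; _mm_add_ps compiles to a single
         * SSE addps instruction. */
        void add4(const float *a, const float *b, float *out) {
            __m128 va = _mm_loadu_ps(a);   /* load 4 unaligned floats */
            __m128 vb = _mm_loadu_ps(b);
            _mm_storeu_ps(out, _mm_add_ps(va, vb));
        }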

    Read the article

  • Change the handle used by the default console I/O functions

    - by b-gen-jack-o-neill
    Hello. Is it possible to somehow change the handle the standard I/O functions use on Windows? The preferred language is C++. If I understand it right, by selecting a console project the compiler just pre-allocates a console for you and points all the standard I/O functions at its handle. What I want to do is let one console app actually write into another app's console buffer. I thought I could get the first app's console handle, pass it to the second app through a file (I don't know much about interprocess communication, and this seems easy), and then somehow use, for example, printf with the first app's handle. Can this be done? I know how to get the console handle, but I have no idea how to redirect printf to that handle. It's just a study-purpose project to understand more about how the OS works behind this. I am interested in how printf knows which console it is associated with.
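
    A sketch of one way this can work on Windows (an assumed approach, not an answer from the thread): rather than passing a raw handle between processes, a process can attach to another process's console by process ID with AttachConsole and then rebind the C runtime's stdout to it, after which printf writes into that console. Error handling omitted:

        #include <windows.h>
        #include <stdio.h>

        /* Attach to the console owned by another process and point printf
         * at it. 'pid' would arrive via the file-based IPC the poster
         * describes, or any other channel. */
        void write_to_other_console(DWORD pid) {
            FreeConsole();                     /* detach from our console   */
            AttachConsole(pid);                /* join the target's console */
            freopen("CONOUT$", "w", stdout);   /* rebind the CRT stream     */
            printf("hello from another process\n");
        }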

    Read the article

  • Different programming languages possibilities

    - by b-gen-jack-o-neill
    Hello. This should be a very simple question. There are many programming languages out there, compiled into machine code or managed code. I first started with ASM back in high school. Assembler is very nice, since you know exactly what the CPU does. Next (as you can see from my other questions here), I decided to learn C and C++. I chose C because, from what I read, it is the language whose output is closest to assembler-written programs. But what I want to know is: can any other Windows programming language out there call the Win32 API? To be exact, C has its special headers and functions for Win32 API interaction; is this assumed to be an important part of a programming language? Or are there languages that have no support for calling the Win32 API, and just use the console for I/O plus some functions for basic file I/O? Because, for Windows programming with graphical output, it is essential to have access to the Win32 API. I know this question might seem silly, but still, please help me; I ask for study purposes. Thanks.
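
    The mechanism most languages use to reach Win32 can be demonstrated in C itself: anything that can load a DLL and call through a function pointer can call the API, which is essentially what a foreign-function interface does behind the scenes. A sketch:

        #include <windows.h>

        /* Call MessageBoxA without linking against user32.lib - the same
         * late-binding trick other languages' FFI layers perform. */
        typedef int (WINAPI *MsgBoxFn)(HWND, LPCSTR, LPCSTR, UINT);

        int main(void) {
            HMODULE user32 = LoadLibraryA("user32.dll");
            if (!user32) return 1;
            MsgBoxFn box = (MsgBoxFn)GetProcAddress(user32, "MessageBoxA");
            if (box) box(NULL, "Called via GetProcAddress", "Win32", MB_OK);
            FreeLibrary(user32);
            return 0;
        }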

    Read the article

  • UITableView cell appearance update not working - iPhone

    - by Jack Nutkins
    I have this piece of code, which I'm using to set the alpha and accessibility of one of my table's cells depending on a value stored in user defaults:

        - (void)viewDidAppear:(BOOL)animated {
            [self reloadTableData];
        }

        - (void)reloadTableData {
            if ([[userDefaults objectForKey:@"canDeleteReceipts"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:0 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
            if ([[userDefaults objectForKey:@"canDeleteMileages"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:1 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
            if ([[userDefaults objectForKey:@"canDeleteAll"] isEqualToString:@"0"]) {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 0.2;
                cell.userInteractionEnabled = NO;
                NSLog(@"In Here");
            } else {
                NSIndexPath *path = [NSIndexPath indexPathForRow:2 inSection:8];
                UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:path];
                cell.contentView.alpha = 1;
                cell.userInteractionEnabled = YES;
            }
        }

    The value stored in user defaults is '0', so the cell in section 8, row 2 should be greyed out and disabled; however, this doesn't happen and the cell is still selectable, with an alpha of 1... The log statement 'In Here' is printed, so the right branch seems to be executing, but the code doesn't seem to do anything. Can anyone explain what I've done wrong? Thanks, Jack

    Read the article

  • What segments does a C compiled program use?

    - by b-gen-jack-o-neill
    Hi, I read on the OSDev wiki that protected mode on the x86 architecture allows you to create separate segments for code and data, while you cannot write into the code section, and that Windows (yes, this is the platform) loads new code into the code segment, while data are created in the data segment. But if this is the case, how does a program know it must switch segments to reach the data segment? Because, if I understand it right, all address instructions point into the segment you run the code from, unless you switch the descriptor. But I also read that the so-called flat memory model allows you to run code and data within one segment. I have read this only in connection with assembler, though. So, please, what is the case with C compiled code on Windows? Thanks.
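
    For orientation, a sketch of where a typical Windows C toolchain puts things: under the flat memory model, the code and data segment registers all describe the same full address range, and the code/data separation survives as sections of the executable, enforced by page protections rather than by segment switching. Section names vary slightly by toolchain (MSVC uses .rdata, GCC .rodata):

        int counter = 42;              /* initialized data  -> .data  */
        int scratch;                   /* zero-initialized  -> .bss   */
        const char *greeting = "hi";   /* string literal    -> .rdata */

        int main(void) {               /* machine code      -> .text  */
            int local = counter;       /* locals live on the stack    */
            return local + scratch;
        }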

    Read the article

  • Quick CPU ring mode protection question

    - by b-gen-jack-o-neill
    Hi, me again :) I am very curious about messing with HW, but my top-level "messing" so far has been linked or inline assembler in a C program. If my understanding of the CPU and ring modes is right, I cannot directly access some low-level CPU features from a user-mode app, like disabling interrupts or changing protected-mode segments, so I must use system calls for everything I want to do. But, if I am right, drivers can run in ring 0. I actually don't know much about drivers, but this is what I'm asking about. I just want to know: is learning how to write my own drivers, and then calling them, the way I should go to do what I described? I know I could write a whole new OS (at least up to a point), but what I actually want is to access some low-level features of the HW from a standard Windows application. So, is a driver the way to go?

    Read the article

  • Help me sort programming languages a bit

    - by b-gen-jack-o-neill
    Hi, so I asked here a few days ago about C# and its principles. Now, if I may, I have some additional general questions about some languages, because for a novice like me it seems a bit confusing. To be exact, I want to ask more about languages' functional capabilities than about syntax and so on. To be honest, it's just these special functions that bother me and make me so confused. For example, C has its printf(), Pascal has writeln(), and so on. I know that in principle the assembler output of these functions would be similar; every language has more or less its own special functions for console output, for file manipulation, etc. But all these functions are de facto part of the OS API, so why is there, for example in C, a distinction between C standard library functions and (on Windows) WinAPI functions, when even printf() has to use some Windows feature, calling some of its functions, to actually show the desired text on the console window, because the actual "showing" is done by the OS? Where is the line between language functions and the system API? Now, the languages I don't quite understand - Python, Ruby, and similar. To be more specific, I know they are similar to Java and C# in that they are compiled into bytecode. But I do not understand what their capabilities are in terms of building GUI applications. I saw a tutorial for using Ruby to program GUI applications on Linux and Windows. But isn't that just some kind of upgrade? I mean, from other tutorials it seemed like these languages were first intended for small scripts rather than for building big applications. I hope you understand why I am confused. If you do, please help me sort it out a bit; I have no one to ask.
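
    The "where is the line" question can be made concrete with a sketch: on Windows, the C library's printf ultimately funnels into the same OS facility that the WinAPI exposes directly (the exact path through the C runtime is an implementation detail):

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            /* Language level: the C standard library. */
            printf("via the C runtime\n");

            /* OS level: the same console, reached through Win32 directly. */
            HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
            DWORD written;
            WriteFile(out, "via Win32 WriteFile\r\n", 21, &written, NULL);
            return 0;
        }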

    Read the article

  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete my picture of how the PC and the OS interact, and I am at a point where I am a little out of my depth when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know that when using a high-level language and WinAPI functions". I want to know; it's for study purposes. So, the very basic structure of how the OS and the PC (by PC I of course mean HW) interact, as I see it, is that everything other than direct CPU commands, which the CPU can do on its own (arithmetic operations, access to its registers, and memory access), must pass through the OS. Mainly because from ring level 3 you cannot use the in and out instructions, which are used for accessing other HW. I know there is MMIO, but it must be set up by port communication first. It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access HW directly, because there was no limitation, no ring mode. So to write a string to the display you could either use a DOS function or directly access video card memory and write it yourself. But as OSes developed, this possibility went away, and that is fine, since the OS now handles all the HW communication, and frankly it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call BIOS-mapped functions or DOS functions, you call a DLL which internally handles everything you don't need to know about. I understand this. I also understand that a device driver is a piece of code that runs at ring level 0, so it can do all the HW interactions. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make the sound card play a sound. So I call the Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so? And if it does call the device driver, does that mean all device drivers that can be called by a WinAPI function must have routines named in some specific way? I mean, when I have a new sound card, must its driver have functions named the same as the old one's, so Windows can actually call the same function from its perspective? But if Windows had a predefined set of functions required of device drivers, then it could not use new drivers that didn't exist before the last version of the OS came out. Please, help me understand this mess. I am really getting mad. Thanks.
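
    The "functions named in some specific way" intuition is close to how it actually works: a Windows driver doesn't export well-known names; instead, when it loads, it fills in a standard dispatch table, and the I/O manager calls through that table. A rough sketch in the style of the Windows Driver Model (heavily simplified; real drivers require far more):

        #include <ntddk.h>

        /* Called by the I/O manager for read requests (IRP_MJ_READ). */
        NTSTATUS MyRead(PDEVICE_OBJECT DeviceObject, PIRP Irp) {
            Irp->IoStatus.Status = STATUS_SUCCESS;
            IoCompleteRequest(Irp, IO_NO_INCREMENT);
            return STATUS_SUCCESS;
        }

        /* The one required entry point: fill in the dispatch table. */
        NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject,
                             PUNICODE_STRING RegistryPath) {
            DriverObject->MajorFunction[IRP_MJ_READ] = MyRead;
            return STATUS_SUCCESS;
        }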

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this is probably not the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the microcontroller's internal USART or a USB-to-RS232 converter; this project is intended to help me understand various principles. So, getting the communication done from the microcontroller side is a piece of cake - I mean, once I know the protocol, it's relatively easy to implement on the micro, because I am in direct control of everything, even precise timing. But this is not the case on the PC. I am not very familiar with how Windows handles connected devices. In one of my previous questions I asked about how Windows works with devices through drivers. I understood that, for internal use by Windows, drivers must make a default set of functions available to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably internal to the OS) with specific "questions", so the HDD driver has to be written to cooperate with Windows, to have its write function in the proper place to be called by the OS. Something similar holds for the GPU, even DirectX; I mean, DirectX must call specific functions from drivers, so the drivers must be written to work with DX. I know many functions from WinAPI work on their own, but even a "simple" window must in the end be written into the framebuffer, using MMIO at addresses specified by drivers. Am I right? So, I expected that Windows would have internal functions, parts of WinAPI, designed to work with certain commonly used things by calling the manufacturers' drivers. But this seems not to be entirely true, because Windows has no way to communicate through the parallel port. I mean, there is no function in the WinAPI to work with the serial port, but there are functions to work with the HDD, GPU, and so on. But now comes the part where I get very lost. I think Windows must have some built-in functions to communicate over USB, because, for example, it handles USB flash memory. So, is there any WinAPI function designed to let the user operate USB through it? Or, when I want to use USB myself, do I have to call the desired USB driver function myself? Because all you need to send to the USB controller is the device address and the information, right? I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function, if there is such a thing, or directly call the original USB driver. Does any of this make sense?
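
    For what it's worth, the usual user-mode pattern on Windows is that the device's driver exposes a device path, and generic Win32 file calls carry the traffic; if the microcontroller enumerates as a standard class (HID, or CDC as a virtual COM port), no new PC-side driver is needed. A sketch with a made-up device path ("\\.\MyUsbGadget" is hypothetical; a real one comes from device enumeration):

        #include <windows.h>

        /* Open a device interface by path and send it some bytes. */
        BOOL send_bytes(const void *data, DWORD len) {
            HANDLE dev = CreateFileA("\\\\.\\MyUsbGadget",
                                     GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                     OPEN_EXISTING, 0, NULL);
            if (dev == INVALID_HANDLE_VALUE) return FALSE;
            DWORD written;
            BOOL ok = WriteFile(dev, data, len, &written, NULL);
            CloseHandle(dev);
            return ok;
        }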

    Read the article

  • Holding variables in memory, C++

    - by b-gen-jack-o-neill
    Today something strange came to my mind. When I want to hold some string in C (or C++) the old way, without using the string header, I just create an array and store the string in it. But I read that any variable definition in the local scope of a function ends up pushing those values onto the stack. So the string actually takes twice the space needed: first, the push instructions are located in memory, and then, when they are executed, another "copy" of the string is created on the stack. First the push instructions, then the stack space used for the one string. So, why is it this way? Why doesn't the compiler just add the string (or other variables) to the program image instead of creating it again at execution? Yes, I know you cannot just have data in the middle of the program block, but it could be attached to the end of the program, with some jump instruction before it, and then we would just point to this data, because it is stored in RAM when the program is executed. Thanks.
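
    C actually offers both behaviours the question contrasts, depending on how the string is declared. A small sketch:

        #include <stdio.h>

        void demo(void) {
            /* Copied: the literal lives in the program image AND a
             * writable copy is built on the stack at every call. */
            char buf[] = "hello";

            /* Not copied: the pointer refers straight to the single
             * read-only copy stored in the program image. */
            const char *ptr = "hello";

            printf("%s %s\n", buf, ptr);
        }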

    Read the article

  • IP to MAC address translation

    - by b-gen-jack-o-neill
    Hi, please tell me one thing I can't understand. On PC networks which use the TCP/IP protocol suite, why is the IP address further translated into a MAC address? I mean, when every device knows its IP, why use the MAC at all and not use the IP number as the address directly?

    Read the article

  • Still some questions about USB

    - by b-gen-jack-o-neill
    Hi, a few days ago I asked here about implementing USB. Now, if I may, I would like to ask a few more questions about things I didn't quite understand. So, first: if I am right, Windows has a device driver for the USB interface, i.e. for the physical device that sends and receives communication. But what does this driver offer to the system (user)? I mean, the USB protocol is made so that its devices are addressed: you first address a device, then send a message to it. But how sophisticated are the device controller (HW) and its driver? Is it so sophisticated that it is a chip you just send a device address and data to, and it writes the outgoing data out and puts the incoming data into some internal register to be read, or through DMA directly into memory? And how do its drivers (SW) really work? Do the drivers have some higher-level functions like "send data to device"? Because I somewhat internally hope there is a way to send some data over USB directly, maybe by calling the USB drivers themselves? Is there any good article or tutorial you know of that really explains how all this logic works? Thanks.

    Read the article

  • 2 basic but interesting questions about .NET

    - by b-gen-jack-o-neill
    Hi, when I first saw C#, I thought it must be some kind of joke. I was starting out with programming in C. But in C# you could just drag and drop objects, and just write event code for them. It was so simple. Now, I still like C the most, because I am very attracted to basic low-level operations, and C is just the next level up from assembler, with a few basic routines, so I like it very much. Even more so because I write little apps for microcontrollers. But yesterday I wrote a very simple control program for my microcontroller-based LED cube in asm, and I needed some way to simply create animation sequences for the cube. So, I remembered C#. I have practically NO C# skills, but I still created a simple GUI program for making animation sequences in about an hour, just with the help of Google and the embedded function descriptions in C#. So, to get to the point: is there any reason other than top speed to use a language other than C#? I mean, it is so effective. I know that Java is somewhat similar, but I expect C# to be more effective on Windows, since it comes directly from Microsoft. The second question is: what is the advantage of compiling into CIL, then running under the CLR, over compiling directly into machine code? I know portability is one, but since C# is mainly for Windows, wouldn't it be more powerful to just compile it directly? Thanks.

    Read the article

  • laptop motherboard "shorts" when connected to adapter

    - by Bash
    Disclaimer: I'm sort of a noob, and this is a long post. Thank you all in advance! Summary: completely dead laptop with no signs of life whatsoever (suddenly, for no apparent reason). Here's the deal: Lenovo Y470 (only a few months old, with no water or shock damage). It stopped working suddenly (no lights, no sound, even when connecting the adapter, with or without the battery). I tried a different adapter (same electrical rating), but no luck. I disassembled the thing completely and tried plugging in the adapter and looking for signs of life with all different combinations of components installed (tried all combinations of RAM, CPU, USB power cords, screen, etc. plugged in). No luck. Then I noticed (as I was plugging in the adapter to try for the millionth time) that there was a "spark" for an instant when I first connected the adapter to the power jack. The adapter's LED would then flash (indicating it isn't working or charging). So, I thought the power jack had a short of some sort (due to bad soldering or something). I scanned virtually every single component on the motherboard and tested the power jack connections with a multimeter. No shorts or damage to anything on the entire motherboard. Now I'm thinking I need to replace the motherboard. But, my actual question: what does this "shorting" when connecting the adapter signify? (By the way, the voltage across the power connections, and the current through them, drop to virtually zero when the adapter is connected and "sparks", and they stay that way.) The bewildering thing is that there are no damaged components, and the voltage across the adapter terminals returns to normal after I disconnect it (so it's not damaged). Please take a look at the pictures (of the motherboard's power connection and nearby components) and see if I'm missing something completely obvious... Links to pictures and laptop and motherboard model: pictures on DropBox. Motherboard model: LA-6881P. Laptop model: Lenovo IdeaPad Y470

    Read the article

  • Windows XP can use a wired network port, but MacBook (OS X) fails on the same port

    - by Dean Hill
    I wired the Cat5 in my house seven years ago. The wired ports have worked fine with both my Windows XP laptop and MacBook. My wireless network also works fine, but I like to use wired occasionally. One of the Cat5 runs wasn't terminated with a jack, so I recently terminated this wire with a port/jack on the wall end and a standard Cat5 plug on the end that plugs into my router. This is the same setup as my other runs. Unfortunately, the MacBook isn't working well with the new wired port. The OS X Network System Preferences show the IP, Subnet, Router, etc., and everything looks fine. A "netstat -ibd" shows no errors or dropped packets. However, when I open a page in Safari, the status says "Contacting 'www.google.com'" and appears to hang. If I wait for a couple of minutes, part of the Google page starts to display, but the full page still doesn't load. When I use a Windows XP laptop on the same wired port, everything works fine. An internet speed test shows good results and all web pages load fine. A "netstat -e" under Windows shows no errors. I've used a Cat5 tester, and the cable tests fine (wires 1-8 light up in sequence). I've replaced both the port/jack and the connector twice to make sure I wired things correctly. I'd really like this Cat5 to work with the MacBook (and I'm trying to avoid running a new length of cable). Any ideas what the problem could be?

    Read the article

  • Convert a cassette tape recording to digital format

    - by Electric Automation
    Has anyone been successful with transferring audio cassette tape recordings to a digital format? I would like to preserve old cassette tape recordings of my grandparents in some digital format: MP3, WAV, etc. The quality of the tapes is mediocre. I think I can handle the quality restoration, but getting the audio from tape to digital is my question. Below is a list of the hardware that I can work with. Cassette deck: I have a Technics stereo cassette deck, model RS-B12. It has separate left and right IN and OUT RCA-type jacks on the back. On the front it has a headphone phono jack, plus left and right mic input phono jacks. On the computer side: I have a Windows Vista PC with no additional software other than what came with the machine from Costco. No sound editing software that I can see. There is no sound card on the PC. On the front panel there is a mini-phono mic input jack, and there are several different types of in/out mini-phono jacks on the back. In addition, USB and FireWire. I also have access to a new (2009) iMac with a mini-phono input jack for a powered mic or other audio source, plus GarageBand, which came with the computer. In addition, USB and FireWire. What are my options for getting these cassette recordings into a digital format? What's the best format? What sort of wires would I need, and would I want to use the USB or FireWire ports, or can I simply use the audio inputs on the PC (or Mac) to receive the audio stream?

    Read the article

  • My home router randomly disconnects me and I'm unable to reconnect to it

    - by Roy Tang
    It's happened a few times, and I'm not sure how to diagnose/debug it, so any advice would be appreciated. Symptoms: sometimes the router will randomly disconnect; the connection icon on my desktop (wired to the router) gets that yellow "!" symbol that tells me my connection just went down. At this point I'm unable to ping the router. Afterwards I try to reset the router by removing and then reconnecting the power jack on the router side (this is the fastest way, as I can't reset the power strip it's connected to without rebooting my desktop; the router has a reset thingy, but it's one of those things where I have to find a pin to stick into the hole, and when I get disconnected I usually need to get reconnected immediately, so I just pull and put back the power jack), but even after that the connection stays in the same state. After the router reboots, if I try to connect to it using a wifi device like my iPad, the iPad prompts me for the wifi password even though it had already "remembered" all the settings for this router. Finally, when I decide to reboot the power strip, and my desktop and the router boot up again, the connection returns to its normal state and I'm able to connect as normal using the desktop and wifi devices. What do I need to check the next time this happens so I can figure out the problem? Is it possibly because we've been using the power jack on the router as an easier way to reboot it? Should I be shopping around for a new router? If it helps, the router is a D-Link DIR-300.

    Read the article
