Search Results

Search found 25362 results on 1015 pages for 'compiling from source'.


  • How to Hide the code of HTML5 games [closed]

    - by jeyanthinath
    Possible Duplicate: HTML5 game obfuscation I have begun developing games in HTML5, and I have a doubt: when a game is used online, its source is visible to others, even if we use complex code and references to external JavaScript files. Everyone can download the code and still use their own modified version of it. Is it possible to hide the code of HTML5 games in a web page, or is there some other way to make it invisible to users? If not, what is the use of HTML5, given that it is open to the user as well?

    Read the article

  • What type of interview questions should you ask "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people who I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment; we use Git as a source control system; we make heavy use of OSS tools and projects and we contribute to them as well; and we have a strong bias towards adhering to strong Object-Oriented principles, SOLID, etc., etc. Now, the normal list of questions that we ask doesn't really seem to apply to applicants that are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer: they have spent a significant amount of their career working almost exclusively with assembly/machine languages; their primary accomplishments include work done with TANDEM systems; and they have extensive experience with technologies like FoxPro and ColdFusion. It's not that we somehow think that what we do is "better" than what they do; on the contrary, we respect these types of applicants and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: What is the difference between an abstract class and an interface? (See the sketch below.) I would think that they would almost never know the answer, or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could eventually learn the stuff that we do. But I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well. Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good representation of whether or not they are still able to learn them and keep up in our fast-paced environment? EDIT: I'm not concerned with whether or not applicants who meet these criteria are in general capable or incapable, as I have already stated that I believe they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine if they are an A+ "legacy" programmer or a D- "legacy" programmer. I've worked with both.
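    For reference, the abstract class versus interface question reduces to something like the following minimal sketch (Java here rather than C#, but the distinction is the same; the type names are purely illustrative):

        // An interface only declares a contract; it carries no instance state.
        interface Logger {
            void log(String message);
        }

        // An abstract class may carry state and partial implementation,
        // but cannot be instantiated directly.
        abstract class BufferedLogger implements Logger {
            protected final StringBuilder buffer = new StringBuilder();

            public void log(String message) {
                buffer.append(message).append('\n');
                if (buffer.length() > 1024) {
                    flush();
                }
            }

            // Subclasses supply the actual output mechanism.
            protected abstract void flush();
        }

    A class can implement many interfaces but extend only one class; that, plus the fields and default behavior an abstract class can carry, is the canonical answer being probed for.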

    Read the article

  • Android development life cycle model query [closed]

    - by Andrew Rose
    I have been researching Google and their approach to marketing the Android OS, primarily using an open source technique with the Open Handset Alliance and outsourcing through third-party developers. I'm now keen to investigate their approach in terms of the various development life cycle models: waterfall, spiral, scrum, agile, etc. I'm curious to get some feedback from professionals on which approach they think Google would use to have a positive effect on their business. Thanks for your time, Andy Rose

    Read the article

  • Differences when pasting formatted text in Word and OneNote

    - by Marko Apfel
    When pasting formatted text, Word and OneNote act a little differently: Word supports RTF formatting, whereas OneNote can only handle HTML formatting. For presenting source code from Visual Studio, the add-in CopySourceAsHtml is available. During copying with Edit > Copy As HTML, some options must be set – notably, Include RTF should be deactivated:

    Read the article

  • Unable to save files from Code::Blocks ONLY

    - by ths
    I have an NTFS drive mounted in the folder /Tejas. I created a new project in a folder on this drive, but I am unable to save my changes; I get the following error message: Couldn't save project /Tejas/Project/codeblock/ciphers/ciphers.cbp (Maybe the file is write-protected?) I get a similar message even when I try to save the C source file. I am able to edit and save files using the gedit editor, so why am I getting this problem?
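    A side note on the likely mechanics (a hedged sketch, not a confirmed diagnosis): an NTFS volume mounted via ntfs-3g without write permission for the current user behaves much like this. An /etc/fstab entry granting a desktop user write access might look like the following, where the device name /dev/sda5 and the uid/gid of 1000 are assumptions that must match the actual system:

        # /etc/fstab - illustrative entry only; the device and ids are assumptions
        /dev/sda5  /Tejas  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0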

    Read the article

  • Using an AGPL 3.0-licensed library for extra functionality in an iOS app

    - by Arnold Sakhnov
    I am building an iOS application, and I am planning to incorporate an AGPL 3.0-licensed library into it to provide some extra (non-essential) functionality. I’ve got a couple of questions regarding this: Do I understand correctly that this still obliges me to open the source of my app? If yes, does it have to be open to the general public, or only to those who have downloaded the app? Does this allow me to distribute the app for a fee?

    Read the article

  • Columnar Databases

    - by jchang
    Ingres just published a TPC-H benchmark for VectorWise, an analytic database technology employing 1) SIMD processing (Intel SSE 4.2), 2) memory optimizations to better leverage on-chip cache, 3) compression, and 4) column-based storage. Ingres originated as a research project at UC Berkeley (see Wikipedia) in the 1970s, and has since become a commercially supported, open source database system. Apparently, Ingres project people later founded Sybase. So Ingres, in a sense, is the grandfather (or perhap...(read more)
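    As a rough illustration of why column-based storage favors analytic scans (a hedged toy sketch, not VectorWise's actual implementation), compare summing one field over a row-oriented layout versus a columnar one; the columnar loop streams through contiguous memory, which caches well and is easy for SSE-style SIMD units to vectorize:

        // Toy contrast between row-oriented and column-oriented layouts (Java).
        public class ColumnScan {
            static final int N = 1_000_000;

            public static void main(String[] args) {
                // Row store: fields interleaved as [amount0, qty0, amount1, qty1, ...]
                double[] rows = new double[N * 2];
                // Column store: the amount column kept contiguous on its own.
                double[] amounts = new double[N];

                double rowSum = 0, colSum = 0;
                for (int i = 0; i < N; i++) {
                    rowSum += rows[i * 2];   // strided access; half of each cache line is wasted
                }
                for (int i = 0; i < N; i++) {
                    colSum += amounts[i];    // sequential access; streams through cache
                }
                System.out.println(rowSum + " " + colSum);
            }
        }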

    Read the article

  • What is the oldest living piece of unaltered production code? [closed]

    - by user1598390
    It's come to my mind that parts of the code in, say, Unix have maybe passed unaltered from one version or flavor into another. Maybe some pieces of the source code of the ls command are the same, unaltered, as were written years ago. Have any of you read or learned about something like this? What would be the oldest living piece of unaltered production code still running, passing from version to version of a program or system? Will the code we write outlive us for decades?

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed and so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that this was not the case and that NUMA placement of the communication variables was actually quite important. For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their responses not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts, ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link when the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their responses to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd-party owner of the line, if any, can respond either to the home or to the original requester (or even to both) according to the protocol policies.
There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings.) And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
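To make the communication pattern concrete, here is a minimal sketch of the request/response mailbox idiom described above, written in Java with plain atomics (the original benchmark harness is not shown in the post, so the names and structure here are illustrative; Java also gives no direct control over NUMA homing, which in the real experiment would be arranged via first-touch placement):

        import java.util.concurrent.atomic.AtomicLong;

        // Client/server ping-pong over two shared words. In the experiment
        // above, the request word is best homed on the client's node and the
        // response word on the server's node: home each word with its writer.
        public class Mailbox {
            static final AtomicLong request = new AtomicLong(0);
            static final AtomicLong response = new AtomicLong(0);
            static final long ROUNDS = 1_000_000;

            public static void main(String[] args) throws InterruptedException {
                Thread server = new Thread(() -> {
                    long last = 0;
                    while (last < ROUNDS) {
                        long req = request.get();      // poll the request mailbox
                        if (req != last) {
                            last = req;
                            response.set(req + 1);     // act as a simple transponder
                        }
                    }
                });
                server.start();

                long t0 = System.nanoTime();
                for (long i = 1; i <= ROUNDS; i++) {
                    request.set(i);                    // publish the request
                    while (response.get() != i + 1) {
                        // busy-wait on the response word
                    }
                }
                long t1 = System.nanoTime();
                server.join();
                System.out.printf("avg round trip: %d ns%n", (t1 - t0) / ROUNDS);
            }
        }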

    Read the article

  • web vs desktop? (php vs c++?)

    - by Dhaivat Pandya
    I need to write a simple file transfer mechanism (that isn't FTP). Firstly, it must have a GUI. Secondly, it must not be Dropbox. Thirdly, it may not use any paid libraries and, hopefully, it uses open source components. The question that came to my mind is: where is everyone moving, from desktop to web, or from web to desktop? Would it be more useful to be experienced in, say, C++ than in PHP (or vice versa)?

    Read the article

  • How do I set up an nvidia graphics adapter to put out 1080p? It seems to be using interlaced mode

    - by keepitsimpleengineer
    After upgrading to 12.04, my mythbuntu client/server seems to be running in 1080i; the clue comes from Xorg.0.log:

        [ 1176.117] (II) NVIDIA(0): Setting mode "1920x1080_60i"
        [ 1231.340] (II) NVIDIA(0): Setting mode "DFP-1:1920x1080_60@1920x1080+0+0"

    This whole thing started from video tearing when watching MythTV recordings; it didn't happen in 10.10. Should I use "TVStandard" "HD1080p" in the Screen section, since this is a dedicated HTPC? It only connects to an HDTV (1080p) via HDMI. Here is the current xorg.conf file:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Keyboard0" "CoreKeyboard"
            # commented out by update-manager, HAL is now used and auto-detects devices
            # Keyboard settings are now read from /etc/default/console-setup
            # InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
            FontPath "unix/:7100"
        EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Mouse0"
        #    Driver "mouse"
        #    Option "Protocol" "auto"
        #    Option "Device" "/dev/psaux"
        #    Option "Emulate3Buttons" "no"
        #    Option "ZAxisMapping" "4 5"
        #EndSection

        # commented out by update-manager, HAL is now used and auto-detects devices
        # Keyboard settings are now read from /etc/default/console-setup
        #Section "InputDevice"
        #    # generated from default
        #    Identifier "Keyboard0"
        #    Driver "kbd"
        #EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "SAMSUNG"
            HorizSync 26.0 - 81.0
            VertRefresh 24.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GT 240"
            Option "TripleBuffer" "1"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "metamodes" "DFP: nvidia-auto-select +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    After a little digging, the question changes slightly, to wit... Per chapter 19 of the nvidia README: "If the EDID for the display device reported a preferred mode timing, and that mode timing is considered a valid mode, then that mode is used as the "nvidia-auto-select" mode." The EDID for my HDMI-connected LCD monitor says to use the first detailed timing as preferred:

        Prefer first detailed timing : Yes

    Also:

        (--) NVIDIA(0): EDID maximum pixel clock : 230.0 MHz

    The list (from startx -- -verbose 6):

        (--) NVIDIA(0): Detailed Timings:
        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):     Pixel Clock      : 148.50 MHz
        (--) NVIDIA(0):     HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):     HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):     VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):     VSyncEnd, VTotal : 1089, 1125
        (--) NVIDIA(0):     H/V Polarity     : +/+

    This is the actual mode selected (from Xorg.0.log):

        (--) NVIDIA(0):   1920 x 1080 @ 60 Hz
        (--) NVIDIA(0):     Pixel Clock      : 74.18 MHz
        (--) NVIDIA(0):     HRes, HSyncStart : 1920, 2008
        (--) NVIDIA(0):     HSyncEnd, HTotal : 2052, 2200
        (--) NVIDIA(0):     VRes, VSyncStart : 1080, 1084
        (--) NVIDIA(0):     VSyncEnd, VTotal : 1094, 1124
        (--) NVIDIA(0):     H/V Polarity     : +/+
        (--) NVIDIA(0):     Extra            : Interlaced
        (--) NVIDIA(0):     CEA Format       : 5

    So my HTPC is down-converting to 1080i and then the monitor is up-converting to 1080p. How can I fix this, please?
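    A hedged suggestion (not verified on this hardware): the nvidia driver can usually be pinned to the progressive mode by naming it explicitly in the metamodes option of the Screen section, instead of leaving it at nvidia-auto-select, e.g.:

        Section "Screen"
            ...
            Option "metamodes" "DFP-1: 1920x1080_60 +0+0"
        EndSection

    The connector name DFP-1 is taken from the Xorg.0.log lines quoted above; the mode name 1920x1080_60 refers to the 60 Hz progressive detailed timing from the EDID.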

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file, or multiple ones. There is even nice tooling around it. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already very grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that, the way this works, all traces (from all sources) get written to an ETW trace. The DiagMon then listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with FullIIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues... The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now then, have fun with your multi-instance deployments… </rant>
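    For readers who haven't used it: a TraceSource is wired up in app.config/web.config roughly like the sketch below (the source and listener names are illustrative), and it is exactly this per-source routing that gets lost once everything funnels through the DiagnosticsMonitorTraceListener.

        <system.diagnostics>
          <sources>
            <!-- a named trace source, routed to its own svclog file -->
            <source name="MyApp.Payments" switchValue="Verbose">
              <listeners>
                <add name="paymentsLog"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="payments.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>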

    Read the article

  • runtime error: invalid memory address or nil pointer dereference

    - by Klink
    I want to learn OpenGL 3.0 with golang. But when I try to compile some code, I get many errors.

        package main

        import (
            "os"
            //"errors"
            "fmt"
            //gl "github.com/chsc/gogl/gl33"
            //"github.com/jteeuwen/glfw"
            "github.com/go-gl/gl"
            "github.com/go-gl/glfw"
            "runtime"
            "time"
        )

        var (
            width  int = 640
            height int = 480
        )

        var (
            points = []float32{0.0, 0.8, -0.8, -0.8, 0.8, -0.8}
        )

        func initScene() {
            gl.Init()
            gl.ClearColor(0.0, 0.5, 1.0, 1.0)
            gl.Enable(gl.CULL_FACE)
            gl.Viewport(0, 0, 800, 600)
        }

        func glfwInitWindowContext() {
            if err := glfw.Init(); err != nil {
                fmt.Fprintf(os.Stderr, "glfw_Init: %s\n", err)
                glfw.Terminate()
            }
            glfw.OpenWindowHint(glfw.FsaaSamples, 1)
            glfw.OpenWindowHint(glfw.WindowNoResize, 1)
            if err := glfw.OpenWindow(width, height, 0, 0, 0, 0, 32, 0, glfw.Windowed); err != nil {
                fmt.Fprintf(os.Stderr, "glfw_Window: %s\n", err)
                glfw.CloseWindow()
            }
            glfw.SetSwapInterval(1)
            glfw.SetWindowTitle("Title")
        }

        func drawScene() {
            for glfw.WindowParam(glfw.Opened) == 1 {
                gl.Clear(gl.COLOR_BUFFER_BIT)

                vertexShaderSrc := `#version 120
        attribute vec2 coord2d;
        void main(void) {
            gl_Position = vec4(coord2d, 0.0, 1.0);
        }`
                vertexShader := gl.CreateShader(gl.VERTEX_SHADER)
                vertexShader.Source(vertexShaderSrc)
                vertexShader.Compile()

                fragmentShaderSrc := `#version 120
        void main(void) {
            gl_FragColor[0] = 0.0;
            gl_FragColor[1] = 0.0;
            gl_FragColor[2] = 1.0;
        }`
                fragmentShader := gl.CreateShader(gl.FRAGMENT_SHADER)
                fragmentShader.Source(fragmentShaderSrc)
                fragmentShader.Compile()

                program := gl.CreateProgram()
                program.AttachShader(vertexShader)
                program.AttachShader(fragmentShader)
                program.Link()

                attribute_coord2d := program.GetAttribLocation("coord2d")
                program.Use()
                //attribute_coord2d.AttribPointer(size, typ, normalized, stride, pointer)
                attribute_coord2d.EnableArray()
                attribute_coord2d.AttribPointer(0, 3, false, 0, &(points[0]))
                //gl.DrawArrays(gl.TRIANGLES, 0, len(points))
                gl.DrawArrays(gl.TRIANGLES, 0, 3)

                glfw.SwapBuffers()
                inputHandler()
                time.Sleep(100 * time.Millisecond)
            }
        }

        func inputHandler() {
            glfw.Enable(glfw.StickyKeys)
            if glfw.Key(glfw.KeyEsc) == glfw.KeyPress {
                //gl.DeleteBuffers(2, &uiVBO[0])
                glfw.Terminate()
            }
            if glfw.Key(glfw.KeyF2) == glfw.KeyPress {
                glfw.SetWindowTitle("Title2")
                fmt.Println("Changed to 'Title2'")
                fmt.Println(len(points))
            }
            if glfw.Key(glfw.KeyF1) == glfw.KeyPress {
                glfw.SetWindowTitle("Title1")
                fmt.Println("Changed to 'Title1'")
            }
        }

        func main() {
            runtime.LockOSThread()
            glfwInitWindowContext()
            initScene()
            drawScene()
        }

    And after that:

        panic: runtime error: invalid memory address or nil pointer dereference
        [signal 0xb code=0x1 addr=0x0 pc=0x41bc6f74]

        goroutine 1 [syscall]:
        github.com/go-gl/gl._Cfunc_glDrawArrays(0x4, 0x7f8500000003)
            /tmp/go-build463568685/github.com/go-gl/gl/_obj/_cgo_defun.c:610 +0x2f
        github.com/go-gl/gl.DrawArrays(0x4, 0x3, 0x0, 0x45bd70)
            /tmp/go-build463568685/github.com/go-gl/gl/_obj/gl.cgo1.go:1922 +0x33
        main.drawScene()
            /home/klink/Dev/Go/gogl/gopher/exper.go:85 +0x1e6
        main.main()
            /home/klink/Dev/Go/gogl/gopher/exper.go:116 +0x27

        goroutine 2 [syscall]:
        created by runtime.main
            /build/buildd/golang-1/src/pkg/runtime/proc.c:221

        exit status 2

    Read the article

  • asus n550jv audio problem: no sound from notebook's speakers

    - by skywalker
    Ubuntu 13.10. The problem is that the internal speakers don't work. I have no problem when I'm using headphones. There is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf, but I can't find the correct model to put into:

        options snd-hda-intel model=

    The file HD-Audio-Models.txt doesn't contain a model for the ALC668. Some info:

        :~$ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

        :~$ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
            Subsystem: Intel Corporation Device 2010
            Flags: bus master, fast devsel, latency 0, IRQ 52
            Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
            Subsystem: ASUSTeK Computer Inc. Device 11cd
            Flags: bus master, fast devsel, latency 0, IRQ 53
            Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: [50] Power Management version 2
            Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
            Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
            Capabilities: [100] Virtual Channel

    PS info:

        :~$ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]

        :~$ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes

    Read the article

  • How to get Linux in your office

    Tux Radar: "And with Linux and free software making a name for itself in the world of big business, many more people are testing the feasibility of switching small and home office software to their open source equivalents."

    Read the article

  • Evolution cannot open address book? It also can no longer sync with Google contacts. 12.10

    - by chad
    When I press the contacts button, it gives me the following error message: Unable to open address book. This address book cannot be opened. This either means that an incorrect URI was entered, or the server is unreachable. Detailed error message: No such source for UID '1352672030.12033.3@chad-Lenovo-G570' Also, when I try to create a new Google address book, it will not import contacts. Please do not tell me to use Thunderbird. Please help! Thanks

    Read the article

  • Does the SPDY protocol eliminate the need for cookieless domains?

    - by Clint Pachl
    With plain HTTP, cookieless domains were an optimization to avoid unnecessarily sending cookie headers for page resources. However, the SPDY protocol compresses HTTP headers and in some cases eliminates unnecessary headers. My question then is, does SPDY make cookieless domains irrelevant? Furthermore, should the page source and all of its resources be hosted at the same domain in order to optimize a SPDY implementation?

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
    The Past

    A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we had evaluated a few of the options and settled on Tortoise SVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said, we are Windows programmers… command prompt bad!!) and it was free. Up to now we have been pretty happy with SVN, as it removed many of the worries I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times we have been unhappy have been when we have had SVN hell days, which pretty much occur when you are doing something out of the norm and suddenly SVN just won't resolve conflicts or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN, but a problem augmented by SVN. When you have SVN hell days you want to curse SVN! With that in mind, I have recently been taking another look at our source code control. I have explored using Git and was very impressed by it, and I have also looked at TFS. From a source code control perspective I don't want to get into a heated discussion on which one is better, but I do want to mention that I wear two hats in my organization, software developer and manager, and with the manager hat on I tend to sway towards the TFS route. So when I was given a coupon to test the DiscountASP.NET Team Foundation Server service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also make the first step towards having an integrated development management system. Some of the things that appeal to me about DiscountASP's offering are the following:

        Basic management/planning facilities, like to-do lists, inside Visual Studio
        Daily backup of data on the server – we are developers, not IT managers, and so the more of this I can outsource the better
        A distributed solution – all of us work remotely, and so this was a big one as well

    Registering and Setting Up with DiscountASP.NET

    The whole registration process was simple and intuitive. The web interface is not the most visually impressive one, but it is functional, and a few seconds after I clicked the last submit button an email was sitting in my inbox giving me my control panel username and suggesting that I read the "Getting Started" article. The getting started article was easy to read and understand, so no complaints there either. Next, to set my dev environment to work. With a few references to the getting started article I had completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service that I signed up for, I have access for 5 users, which is sufficient for my internal needs. So from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and the URL.

    My Concerns Resolved

    1) Security

    So, a few concerns I had about the service. First and foremost: is it secure? I would hate for someone to get access to our code, and the whole idea of putting it up on the internet is a concern for me. Turning to the Knowledge Base on the DiscountASP website, this is one of the first questions I can see answered. According to them, it is secure.
    I have extracted their comment below regarding this. "Our TFS hosting service is secure. We only accept HTTPS connections, ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details."

    2) Web Portal Access

    The other big concern I have is regarding web portal access. In the ideal world I would like to be able to give my end users access to a web portal for reporting bugs etc. When I initially read through the FAQ of the site it mentioned that there was web portal access, but from what I can see this is just for "users". Since I am limited to 5 users for the account, it would not be practical to set up external users that we could get feedback from on bugs etc. I would be interested to know whether this is possible; if so, if someone could post it in the comments it would be much appreciated. If this isn't possible, it is a slight letdown, as we rely heavily on end user feedback and it would have been ideal to have gotten this within the service. Other than those two items, I didn't have any real concerns that were unresolved.

    So where do I go from here?

    Time passed from the initial writing of this post, and as work whirred in and out of my inbox I still had not had a proper opportunity to give the service a test run. Recently, though, things began to slow down, and then, surprise surprise, I had another SVN hell day. With that experience I had a new-found resolve to get our team onto TFS, and so today we are going to start using the service as a team. I am hoping that I do not have TFS hell days, but if I do, I will be sure to write about them. In short, the verdict is still out on whether this service is going to be invaluable to my business or whether it will create more headaches than it is worth, BUT I am hoping it will be invaluable. I will only really be able to determine that in a few months… till then!

    Read the article

  • SCSF for Visual Studio 2010

    - by Anthony Trudeau
    The Smart Client Software Factory (SCSF) for Visual Studio 2010 was uploaded tonight.  You can get it, the source code, and the documentation on the patterns & practices page. Note: Do not forget to "unblock" the documentation (CHM) file after you download it.  To unblock it right click the file, choose Properties, and click the Unblock button.

    Read the article

  • Top 5 PHP Frameworks That You Should Be Aware Of

    The offshore application development scenario has transmuted into a frenzy due to the inception of PHP, a widely used open source scripting language especially suited to the building of dynamic web pag... [Author: Chintan Shah - Web Design and Development - May 07, 2010]

    Read the article

  • Installing MOSS 2007 on Windows 2008 R2

    - by Manesh Karunakaran
    When you try to install MOSS 2007 on Windows 2008 R2 using installation media that is older than SP2, you get the following error, saying that "This program is blocked due to compatibility issues". All is not lost though: all you need to do is slipstream the SP2 updates into the MOSS 2007 setup. Here's a nice how-to on that: http://blogs.technet.com/seanearp/archive/2009/05/20/slipstreaming-sp2-into-sharepoint-server-2007.aspx Once you slipstream the SP2 updates, you will be able to continue with the installation without the above error. HTH.

    You may have already read from blogs about the April Cumulative Update for separate components in SharePoint. Now, the server packages (also known as "Uber" packages) of the April Cumulative Update for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services 3.0 are ready for download.

    Download information:
        Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968850
        Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968851

    Detailed description:
        Description of the Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/kb/968850
        Description of the Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/kb/968851

    Installation recommendation for a fresh SharePoint server: to keep all files in a SharePoint installation up to date, the following sequence is recommended.
        1. Service Pack 2 for Windows SharePoint Services 3.0
        2. Service Pack 2 for Office SharePoint Server 2007
        3. April Cumulative Update package for Windows SharePoint Services 3.0
        4. April Cumulative Update package for Office SharePoint Server 2007

    Please note: starting from the April Cumulative Update, the packages will no longer install on a farm without a service pack installed. You must have installed either Service Pack 1 (SP1) or SP2 prior to the installation of the cumulative updates. After applying the preceding updates, run the SharePoint Products and Technologies Configuration Wizard, or "psconfig –cmd upgrade –inplace b2b -wait" on the command line. This needs to be done on every server in the farm with SharePoint installed. The version of the content databases should be 12.0.6504.5000 after successfully applying these updates.

    For more in-depth guidance on the update process, we recommend that customers refer to the following articles. These articles describe the correct way to deploy updates, identify known issues (and resolutions), and provide information about creating slipstream builds.
        Deploy software updates for Windows SharePoint Services 3.0: http://technet.microsoft.com/en-us/library/cc288269.aspx
        Deploy software updates for Office SharePoint Server 2007: http://technet.microsoft.com/en-us/library/cc263467.aspx
        Create an installation source that includes software updates (Windows SharePoint Services 3.0): http://technet.microsoft.com/en-us/library/cc287882.aspx
        Create an installation source that includes software updates (Office SharePoint Server 2007): http://technet.microsoft.com/en-us/library/cc261890.aspx
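    The slipstream step itself amounts to extracting the SP2 packages into the Updates folder of a writable copy of the installation source, which Setup picks up automatically during installation. A hedged sketch; the package file names shown correspond to the WSS 3.0 SP2 (KB953338) and MOSS 2007 SP2 (KB953334) downloads, and C:\MOSS2007 is an assumed local copy of the install media:

        wssv3sp2-kb953338-x86-fullfile-en-us.exe /extract:C:\MOSS2007\Updates
        officeserver2007sp2-kb953334-x86-fullfile-en-us.exe /extract:C:\MOSS2007\Updates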

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

    1) (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.

    2) (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check whether timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1). A sketch of this matching step is shown below.

    My main concern is efficiency. These samples have about 2 million records on each side, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is the better option?
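    A minimal sketch of the matching step from option 2 (in Java rather than C++, since the structure is identical; all names are illustrative). The detail worth stressing is that the 1-2 second clock skew mentioned above has to be part of the lookup itself, which is why a sorted set of timestamps per phone number beats a plain list:

        import java.util.*;

        // Reconcile two exports of (phoneNumber, epochSeconds) records:
        // every sent SMS should have a billed counterpart within +/- 2 seconds.
        public class SmsReconciler {
            static final long TOLERANCE = 2;

            static List<long[]> findUncharged(List<long[]> sent, List<long[]> billed) {
                // phone number -> sorted billing timestamps for that phone
                Map<Long, TreeSet<Long>> billedByPhone = new HashMap<>();
                for (long[] rec : billed) {
                    billedByPhone.computeIfAbsent(rec[0], k -> new TreeSet<>()).add(rec[1]);
                }

                List<long[]> uncharged = new ArrayList<>();
                for (long[] rec : sent) {
                    TreeSet<Long> times = billedByPhone.get(rec[0]);
                    // greatest billed timestamp <= sent time + tolerance
                    Long match = (times == null) ? null : times.floor(rec[1] + TOLERANCE);
                    if (match == null || match < rec[1] - TOLERANCE) {
                        uncharged.add(rec);  // sent but apparently never billed
                    }
                }
                return uncharged;
            }
        }

    The floor() lookup makes each probe O(log n) within a phone number's timestamps, so two million records per side stay tractable in memory.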

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-14

    - by Bob Rhubart
    Duke's Choice Award Nominations Close Friday! | The Java Source
    The Duke's Choice Awards celebrate extreme innovation in the world of Java technology. Nominate an individual, a group, or a company who shows the best in Java innovation. Nominate at Java.net/dukeschoice. Nominations are open until this Friday, June 15.

    Whole Lotta Virtualization Goin' On | Rick Ramsey
    The OTN Garage's Rick Ramsey shares a list of recent virtualization articles available on OTN, along with a link to a video by The Killer, Mr Jerry Lee Lewis.

    A Pragmatic Path to Navigating your Infrastructure to the Cloud | The WebLogic Server Blog
    Ruma Sanyal offers an overview of a recent Oracle webcast featuring Gartner VP and Distinguished Analyst Andy Butler and Vice President and Gartner Fellow Massimo Pezzini.

    Migrating C/C++ embedded SQL code | Tom Laszewski
    Cloud migration expert Tom Laszewski explains the how-to in 5 easy steps.

    Aetna Dumps Its Siloed Enterprise Architecture for SOA | CIO.com
    CIO writer Stephanie Overby tells the story of how one major health insurance provider put the "Enterprise" back in Enterprise Architecture. (H/T to Joe McKendrick for this story.)

    Downloading specific video renditions in WebCenter Content | Kyle Hatlestad
    How-to from Oracle WebCenter & ADF A-Team blogger Kyle Hatlestad.

    Eclipse DemoCamp - June 2012 - Redwood Shores, CA
    Location: Oracle HQ - 10 Twin Dolphin Drive, Redwood Shores, CA (Map). Date and Time: Wednesday, June 13, 2012, from 6pm - 9pm. Agenda: The evolution of Java persistence, Doug Clarke, EclipseLink Project Lead, Oracle; Integrating BIRT into Applications, Ashwini Verma, Actuate Corporation; Leveraging OSGi In The Enterprise, Kamal Muralidharan, Lead Engineer, eBay; Developing Rich ADF Applications with Java EE, Greg Stachnick, Oracle; NVIDIA Nsight Eclipse Edition, Goodwin (Tech lead - Visual tools), Eugene Ostroukhov (Senior engineer - Visual tools)

    2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in San Francisco
    Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle OpenWorld 2012 in San Francisco. Deadline for submissions is July 17, 2012.

    BI Architecture Master Class for Partners - Oracle Architecture Unplugged
    Date: June 21, 2012. No slides, no fluff. This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT architects and BI+DW specialists. The focus will be on architectural issues and considerations.

    DevOps: Evolving to Handle Disruption | JP Morgenthal
    The subject of DevOps came up this week during an OTN ArchBeat podcast interview with Ron Batra and James Baty on the role of the cloud architect (that program will be available in a few weeks). Morgenthal's article for InfoQ offers a good overview of what DevOps is and how it works.

    Thought for the Day
    "Elegance is not a dispensable luxury but a factor that decides between success and failure." - Edsger Dijkstra. Source: softwarequotes.com

    Read the article

  • What version control system can manage all aspects?

    - by Andy Canfield
    A few months ago I dug into Subversion and Git and was disappointed. They handle SOURCE CODE fine, but not other aspects. For example, a web site under version control needs to manage file/directory ownership, file/directory read and write access, Access Control Lists, timestamps, database contents, and external links. Is there a version control system that can do as perfect a reversion as reloading from a month-old backup?

    Read the article
