Scala's traits come nowhere near the functionality offered by dependency injection frameworks such as Guice. When the Scala community encounters a real need for DI, it's very likely to produce a framework that looks a lot like Guice.
Your point regarding the rope is well taken, the difference being that with Java, the rope lies in the frameworks, while in Scala, it lies both in the frameworks (take a look at Lift) and in the language.
Well, for a big enough project something like Guice will become necessary sooner or later. However, for smaller programs traits are simpler to use and get you 80% of Guice's functionality--and they're built into the language. Traits are simple enough that any reasonably competent developer should be able to reason about them easily. I don't see why a Scala-native DI framework couldn't leverage this existing, simple solution to build the last 20%.
Guice is the result of smart Google engineers who had probably tried all sorts of DI frameworks before and finally developed something nice. It's the result of more than a decade of painful experiences with existing solutions to a real problem.
In conclusion, I think you can hang yourself when using any language. In Scala, when I needed to mock out a dependency for a project, I settled on a trait-based approach that was simple and elegant. In Java, I could have rolled my own Factory scheme, which would have been ugly and unmaintainable, or picked a DI framework and hoped it didn't suck. In this case I was presented with many fewer opportunities to make a mess than if I had been using Java.
All fair points, but I'm really curious to see what you did with traits that has anything to do with dependency injection. DI is essentially a runtime activity while traits are just a more granular way of assembling your objects together at compile time. I just don't see the intersection between the two concepts.
I have the feeling that you might be confused about what DI is exactly, but I'll happily eat my words if you can point me to a description of what you did, some source code, a blog post or whatever.
My approach was this: define a trait whose methods return the needed dependencies. Any class that requires dependencies extends this trait and is itself abstract. Then, when you want a real working instance of the class, you mix in a concrete implementation of the trait. It's simple, declarative, and it got the job done. Here's an example of the pattern:
abstract class Foo {
  def addToMe(i: Int): Int
}

trait FooService { def foo: Foo }

class RealFoo(j: Int) extends Foo {
  def addToMe(i: Int) = i + j
}

trait RealFooService extends FooService { def foo = new RealFoo(42) }

abstract class NeedsFoo extends FooService {
  def doSomething = foo.addToMe(100)
}

(new NeedsFoo with RealFooService).doSomething
// Int = 142

class TestFoo extends Foo {
  def addToMe(i: Int) = i + 1
}

trait TestFooService extends FooService { def foo = new TestFoo }

(new NeedsFoo with TestFooService).doSomething
// Int = 101
Now, this is all done at compile time, but it really seems to accomplish most of what the first few examples in the Guice documentation do, with similar amounts of boilerplate. Again, I'm sure this approach would fall short in many more complicated scenarios, but it gets you a hell of a lot farther than Java does in solving this problem without adding any extra frameworks. Traits are also not very difficult to reason about.
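And on the compile-time point: the mixin *composition* is checked at compile time, but nothing stops you from choosing which service to mix in at run time. Here's a rough sketch of that idea (the makeNeedsFoo helper is my own hypothetical name, and the definitions from above are repeated so the snippet stands alone):

```scala
// Same pattern as above, restated so this compiles on its own.
abstract class Foo { def addToMe(i: Int): Int }
trait FooService { def foo: Foo }

class RealFoo(j: Int) extends Foo { def addToMe(i: Int) = i + j }
trait RealFooService extends FooService { def foo = new RealFoo(42) }

class TestFoo extends Foo { def addToMe(i: Int) = i + 1 }
trait TestFooService extends FooService { def foo = new TestFoo }

abstract class NeedsFoo extends FooService {
  def doSomething = foo.addToMe(100)
}

// The wiring decision is made at run time; each branch is still
// fully type-checked at compile time.
def makeNeedsFoo(testing: Boolean): NeedsFoo =
  if (testing) new NeedsFoo with TestFooService
  else new NeedsFoo with RealFooService

makeNeedsFoo(true).doSomething  // Int = 101
makeNeedsFoo(false).doSomething // Int = 142
```

You don't get Guice's ability to assemble arbitrary object graphs from external configuration this way, but for a flag-driven choice between a handful of known wirings it covers a lot of ground.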